parent_paper_title,parent_paper_arxiv_id,citation_shorthand,raw_citation_text,cited_paper_title,cited_paper_arxiv_link,cited_paper_abstract,has_metadata,is_arxiv_paper,bib_paper_authors,bib_paper_year,bib_paper_month,bib_paper_url,bib_paper_doi,bib_paper_journal,original_title,search_res_title,search_res_url,search_res_content TaxAgent: How Large Language Model Designs Fiscal Policy,2506.02838v1,NBERw21340,\cite{NBERw21340},Effective Policy for Reducing Inequality? The Earned Income Tax Credit and the Distribution of Income,,,True,False,"Hoynes, Hilary W and Patel, Ankur J",2015.0,July,http://www.nber.org/papers/w21340,10.3386/w21340,,Effective Policy for Reducing Inequality? The Earned Income Tax Credit and the Distribution of Income,Effective Policy for Reducing Inequality? The Earned Income,https://ideas.repec.org/p/nbr/nberwo/21340.html,Our results show that a policy-induced $1000 increase in the EITC leads to a 7.3 percentage point increase in employment and a 9.4 percentage point reduction TaxAgent: How Large Language Model Designs Fiscal Policy,2506.02838v1,NBERw21211,\cite{NBERw21211},The Earned Income Tax Credit (EITC),,,True,False,"Nichols, Austin and Rothstein, Jesse",2015.0,May,http://www.nber.org/papers/w21211,10.3386/w21211,,The Earned Income Tax Credit (EITC),What is the earned income tax credit? - Tax Policy Center,https://taxpolicycenter.org/briefing-book/what-earned-income-tax-credit,The earned income tax credit (EITC) provides substantial support to low- and moderate-income working parents who claim a qualifying child. TaxAgent: How Large Language Model Designs Fiscal Policy,2506.02838v1,Foo2019ProcessAC,\cite{Foo2019ProcessAC},Process and Critical Approaches to Solving the Systemic Climate Change Governance Problem,,,True,False,Check Woo Foo,2019.0,,https://api.semanticscholar.org/CorpusID:235319207,,Politics \& Energy eJournal,Process and Critical Approaches to Solving the Systemic Climate Change Governance Problem,Process and Critical Approaches to Solving the Systemic Climate ...,https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3608501,"The most important and urgent task, besides avoiding nuclear war, is abatement of the existential threat of systemic climate change," TaxAgent: How Large Language Model Designs Fiscal Policy,2506.02838v1,Patjoshi2015DesignAD,\cite{Patjoshi2015DesignAD},Design and Development of Advanced Control strategies for Power Quality Enhancement at Distribution Level,,,True,False,Rajesh Kumar Patjoshi,2015.0,,https://api.semanticscholar.org/CorpusID:112918597,,,Design and Development of Advanced Control strategies for Power Quality Enhancement at Distribution Level,(PDF) Advanced Control Strategies for UPQC to Improve ...,https://www.researchgate.net/publication/279289697_Advanced_Control_Strategies_for_UPQC_to_Improve_Power_Quality_of_Power_Distribution_Systems,"PDF | On Jul 2, 2014, Quoc Nam Trinh published Advanced Control Strategies for UPQC to Improve Power Quality of Power Distribution Systems" TaxAgent: How Large Language Model Designs Fiscal Policy,2506.02838v1,10.1257/jep.25.4.165,\cite{10.1257/jep.25.4.165},The Case for a Progressive Tax: From Basic Research to Policy Recommendations,,,True,False,"Diamond, Peter and Saez, Emmanuel",2011.0,December,https://www.aeaweb.org/articles?id=10.1257/jep.25.4.165,10.1257/jep.25.4.165,Journal of Economic Perspectives,The Case for a Progressive Tax: From Basic Research to Policy Recommendations,The Case for a Progressive 
Tax,https://economics.mit.edu/sites/default/files/2022-09/jep.25.4.165.pdf,"Therefore, optimal income tax theory is first a normative theory that shows how a social welfare objective combines with constraints arising from limits on resources and behavioral responses to taxation in order to derive specific tax policy recommendations. In addition, optimal income tax theory can be used to" TaxAgent: How Large Language Model Designs Fiscal Policy,2506.02838v1,10.2307/2296779,\cite{10.2307/2296779},An Exploration in the Theory of Optimum Income Taxation,,,True,False,"Mirrlees, J. A.",1971.0,04,https://doi.org/10.2307/2296779,10.2307/2296779,The Review of Economic Studies,An Exploration in the Theory of Optimum Income Taxation,Exploration in the Theory of Optimum Income Taxation,https://academic.oup.com/restud/article-abstract/38/2/175/1527903,"by JA Mirrlees · 1971 · Cited by 7415 — J. A. Mirrlees; An Exploration in the Theory of Optimum Income Taxation, The Review of Economic Studies, Volume 38, Issue 2, 1 April 1971, Pages 175–208," TaxAgent: How Large Language Model Designs Fiscal Policy,2506.02838v1,RePEc:aea:aecrev:v:61:y:1971:i:1:p:8-27,\cite{RePEc:aea:aecrev:v:61:y:1971:i:1:p:8-27},Optimal Taxation and Public Production: I--Production Efficiency,,,True,False,"Diamond, Peter and Mirrlees, James",1971.0,,https://EconPapers.repec.org/RePEc:aea:aecrev:v:61:y:1971:i:1:p:8-27,,American Economic Review,Optimal Taxation and Public Production: I--Production Efficiency,[PDF] Optimal Taxation and Public Production I: Production Efficiency,http://hassler-j.iies.su.se/Courses/DynPubFin/Papers/DiamondMirrlees.pdf,Theories of optimal production in a planned economy have usually assumed that the tax system can allow the government to achieve any desired redistribution of TaxAgent: How Large Language Model Designs Fiscal Policy,2506.02838v1,10.1111/1467-937X.00166,\cite{10.1111/1467-937X.00166},Using Elasticities to Derive Optimal Income Tax Rates,,,True,False,"Saez, Emmanuel",2001.0,01,https://doi.org/10.1111/1467-937X.00166,10.1111/1467-937X.00166,The Review of Economic Studies,Using Elasticities to Derive Optimal Income Tax Rates,Using Elasticities to Derive Optimal Income Tax Rates,https://academic.oup.com/restud/article/68/1/205/1568609,by E Saez · 2001 · Cited by 1885 — This paper derives optimal income tax formulas using compensated and uncompensated elasticities of earnings with respect to tax rates.
TaxAgent: How Large Language Model Designs Fiscal Policy,2506.02838v1,10.1257/pol.6.1.230,\cite{10.1257/pol.6.1.230},Optimal Taxation of Top Labor Incomes: A Tale of Three Elasticities,,,True,False,"Piketty, Thomas and Saez, Emmanuel and Stantcheva, Stefanie",2014.0,February,https://www.aeaweb.org/articles?id=10.1257/pol.6.1.230,10.1257/pol.6.1.230,American Economic Journal: Economic Policy,Optimal Taxation of Top Labor Incomes: A Tale of Three Elasticities,Optimal Taxation of Top Labor Incomes: A Tale of Three Elasticities,https://www.nber.org/papers/w17616,This paper presents a model of optimal labor income taxation where top incomes respond to marginal tax rates through three channels. TaxAgent: How Large Language Model Designs Fiscal Policy,2506.02838v1,10.1257/pol.20180033,\cite{10.1257/pol.20180033},Optimal Income Taxation with Unemployment and Wage Responses: A Sufficient Statistics Approach,,,True,False,"Kroft, Kory and Kucko, Kavan and Lehmann, Etienne and Schmieder, Johannes",2020.0,February,https://www.aeaweb.org/articles?id=10.1257/pol.20180033,10.1257/pol.20180033,American Economic Journal: Economic Policy,Optimal Income Taxation with Unemployment and Wage Responses: A Sufficient Statistics Approach,Optimal Income Taxation with Unemployment and Wage Responses,https://www.aeaweb.org/articles?id=10.1257/pol.20180033,We derive a sufficient statistics tax formula in a model that incorporates unemployment and endogenous wages to study the shape of the optimal income tax. Key TaxAgent: How Large Language Model Designs Fiscal Policy,2506.02838v1,zheng2020aieconomistimprovingequality,\cite{zheng2020aieconomistimprovingequality},"The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies",http://arxiv.org/abs/2004.13332v1,"Tackling real-world socio-economic challenges requires designing and testing economic policies. However, this is hard in practice, due to a lack of appropriate (micro-level) economic data and limited opportunity to experiment. In this work, we train social planners that discover tax policies in dynamic economies that can effectively trade-off economic equality and productivity. We propose a two-level deep reinforcement learning approach to learn dynamic tax policies, based on economic simulations in which both agents and a government learn and adapt. Our data-driven approach does not make use of economic modeling assumptions, and learns from observational data alone. We make four main contributions. First, we present an economic simulation environment that features competitive pressures and market dynamics. We validate the simulation by showing that baseline tax systems perform in a way that is consistent with economic theory, including in regard to learned agent behaviors and specializations. Second, we show that AI-driven tax policies improve the trade-off between equality and productivity by 16% over baseline policies, including the prominent Saez tax framework. Third, we showcase several emergent features: AI-driven tax policies are qualitatively different from baselines, setting a higher top tax rate and higher net subsidies for low incomes. Moreover, AI-driven tax policies perform strongly in the face of emergent tax-gaming strategies learned by AI agents. Lastly, AI-driven tax policies are also effective when used in experiments with human participants. 
In experiments conducted on MTurk, an AI tax policy provides an equality-productivity trade-off that is similar to that provided by the Saez framework along with higher inverse-income weighted social welfare.",True,True,Stephan Zheng and Alexander Trott and Sunil Srinivasa and Nikhil Naik and Melvin Gruesbeck and David C. Parkes and Richard Socher,2020.0,,https://arxiv.org/abs/2004.13332,,,"The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies",[PDF] Improving Equality and Productivity with AI-Driven Tax Policies - arXiv,http://arxiv.org/pdf/2004.13332,"The AI Economist uses AI to discover tax policies that improve the trade-off between equality and productivity, achieving a 16% improvement" TaxAgent: How Large Language Model Designs Fiscal Policy,2506.02838v1,NBERc14009,\cite{NBERc14009},The Impact of Machine Learning on Economics,,,True,False,Susan Athey,2018.0,January,http://www.nber.org/chapters/c14009,,,The Impact of Machine Learning on Economics,The Impact of Machine Learning on Economics,https://www.gsb.stanford.edu/faculty-research/publications/impact-machine-learning-economics,"This paper provides an assessment of the early contributions of machine learning to economics, as well as predictions about its future contributions. It begins by briefly overviewing some themes from the literature on machine learning, and then draws some contrasts with traditional approaches to estimating the impact of counterfactual policies in economics. Next, we review some of the initial “off-the-shelf” applications of machine learning to economics, including applications in analyzing text and images. Finally, we overview a set of broader predictions about the future impact of machine learning on economics, including its impacts on the nature of collaboration, funding, research tools, and research questions." TaxAgent: How Large Language Model Designs Fiscal Policy,2506.02838v1,AxtellFarmer2022,\cite{AxtellFarmer2022},"Agent Based Modeling in Economics and Finance: Past, Present, and Future",,,True,False,"Axtell, R. and Farmer, J.",2022.0,,,,Journal of Economic Literature,"Agent Based Modeling in Economics and Finance: Past, Present, and Future","[PDF] Agent-Based Modeling in Economics and Finance: Past, Present ...",https://complexityhandbook.uni-hohenheim.de/fileadmin/einrichtungen/complexityhandbook/AXTELL_Robert.pdf,Agent-based modeling is a novel computational methodology for representing the behavior of individuals in order to study social phenomena. TaxAgent: How Large Language Model Designs Fiscal Policy,2506.02838v1,DelliGatti2018,\cite{DelliGatti2018},Contents,,,True,False,"Delli Gatti, Domenico and Fagiolo, Giorgio and Gallegati, Mauro and Richiardi, Matteo and Russo, Alberto",2018.0,,,,,Contents,CONTENTS | definition in the Cambridge English Dictionary,https://dictionary.cambridge.org/us/dictionary/english/contents,everything that is contained within something.
TaxAgent: How Large Language Model Designs Fiscal Policy,2506.02838v1,shen2025phyxdoesmodelwits,\cite{shen2025phyxdoesmodelwits},"PhyX: Does Your Model Have the ""Wits"" for Physical Reasoning?",http://arxiv.org/abs/2505.15929v2,"Existing benchmarks fail to capture a crucial aspect of intelligence: physical reasoning, the integrated ability to combine domain knowledge, symbolic reasoning, and understanding of real-world constraints. To address this gap, we introduce PhyX: the first large-scale benchmark designed to assess models capacity for physics-grounded reasoning in visual scenarios. PhyX includes 3K meticulously curated multimodal questions spanning 6 reasoning types across 25 sub-domains and 6 core physics domains: thermodynamics, electromagnetism, mechanics, modern physics, optics, and wave\&acoustics. In our comprehensive evaluation, even state-of-the-art models struggle significantly with physical reasoning. GPT-4o, Claude3.7-Sonnet, and GPT-o4-mini achieve only 32.5%, 42.2%, and 45.8% accuracy respectively-performance gaps exceeding 29% compared to human experts. Our analysis exposes critical limitations in current models: over-reliance on memorized disciplinary knowledge, excessive dependence on mathematical formulations, and surface-level visual pattern matching rather than genuine physical understanding. We provide in-depth analysis through fine-grained statistics, detailed case studies, and multiple evaluation paradigms to thoroughly examine physical reasoning capabilities. To ensure reproducibility, we implement a compatible evaluation protocol based on widely-used toolkits such as VLMEvalKit, enabling one-click evaluation. More details are available on our project page: https://phyx-bench.github.io/.",True,True,Hui Shen and Taiqiang Wu and Qi Han and Yunta Hsieh and Jizhou Wang and Yuyue Zhang and Yuxin Cheng and Zijian Hao and Yuansheng Ni and Xin Wang and Zhongwei Wan and Kai Zhang and Wendong Xu and Jing Xiong and Ping Luo and Wenhu Chen and Chaofan Tao and Zhuoqing Mao and Ngai Wong,2025.0,,https://arxiv.org/abs/2505.15929,,,"PhyX: Does Your Model Have the ""Wits"" for Physical Reasoning?","PhyX: Does Your Model Have the ""Wits"" for Physical Reasoning?",http://arxiv.org/pdf/2505.15929v2,"Existing benchmarks fail to capture a crucial aspect of intelligence: physical reasoning, the integrated ability to combine domain knowledge, symbolic reasoning, and understanding of real-world constraints. To address this gap, we introduce PhyX: the first large-scale benchmark designed to assess models capacity for physics-grounded reasoning in visual scenarios. PhyX includes 3K meticulously curated multimodal questions spanning 6 reasoning types across 25 sub-domains and 6 core physics domains: thermodynamics, electromagnetism, mechanics, modern physics, optics, and wave\&acoustics. In our comprehensive evaluation, even state-of-the-art models struggle significantly with physical reasoning. GPT-4o, Claude3.7-Sonnet, and GPT-o4-mini achieve only 32.5%, 42.2%, and 45.8% accuracy respectively-performance gaps exceeding 29% compared to human experts. Our analysis exposes critical limitations in current models: over-reliance on memorized disciplinary knowledge, excessive dependence on mathematical formulations, and surface-level visual pattern matching rather than genuine physical understanding.
We provide in-depth analysis through fine-grained statistics, detailed case studies, and multiple evaluation paradigms to thoroughly examine physical reasoning capabilities. To ensure reproducibility, we implement a compatible evaluation protocol based on widely-used toolkits such as VLMEvalKit, enabling one-click evaluation. More details are available on our project page: https://phyx-bench.github.io/." TaxAgent: How Large Language Model Designs Fiscal Policy,2506.02838v1,zhao2024competeaiunderstandingcompetitiondynamics,\cite{zhao2024competeaiunderstandingcompetitiondynamics},"CompeteAI: Understanding the Competition Dynamics in Large Language Model-based Agents",http://arxiv.org/abs/2310.17512v2,"Large language models (LLMs) have been widely used as agents to complete different tasks, such as personal assistance or event planning. While most of the work has focused on cooperation and collaboration between agents, little work explores competition, another important mechanism that promotes the development of society and economy. In this paper, we seek to examine the competition dynamics in LLM-based agents. We first propose a general framework for studying the competition between agents. Then, we implement a practical competitive environment using GPT-4 to simulate a virtual town with two types of agents, restaurant agents and customer agents. Specifically, the restaurant agents compete with each other to attract more customers, where competition encourages them to transform, such as cultivating new operating strategies. Simulation experiments reveal several interesting findings at the micro and macro levels, which align well with existing market and sociological theories. We hope that the framework and environment can be a promising testbed to study competition that fosters understanding of society. Code is available at: https://github.com/microsoft/competeai.",True,True,Qinlin Zhao and Jindong Wang and Yixuan Zhang and Yiqiao Jin and Kaijie Zhu and Hao Chen and Xing Xie,2024.0,,https://arxiv.org/abs/2310.17512,,,"CompeteAI: Understanding the Competition Dynamics in Large Language Model-based Agents",CompeteAI: Understanding the Competition Dynamics in Large ...,https://arxiv.org/abs/2310.17512,"In this paper, we seek to examine the competition dynamics in LLM-based agents. We first propose a general framework for studying the competition between" TaxAgent: How Large Language Model Designs Fiscal Policy,2506.02838v1,nie2024surveylargelanguagemodels,\cite{nie2024surveylargelanguagemodels},"A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges",,,True,False,Yuqi Nie and Yaxuan Kong and Xiaowen Dong and John M. Mulvey and H. Vincent Poor and Qingsong Wen and Stefan Zohren,2024.0,,https://arxiv.org/abs/2406.11903,,,"A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges","A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges",http://arxiv.org/pdf/2406.11903v1,"Recent advances in large language models (LLMs) have unlocked novel opportunities for machine learning applications in the financial domain. These models have demonstrated remarkable capabilities in understanding context, processing vast amounts of data, and generating human-preferred contents. In this survey, we explore the application of LLMs on various financial tasks, focusing on their potential to transform traditional practices and drive innovation. 
We provide a discussion of the progress and advantages of LLMs in financial contexts, analyzing their advanced technologies as well as prospective capabilities in contextual understanding, transfer learning flexibility, complex emotion detection, etc. We then highlight this survey for categorizing the existing literature into key application areas, including linguistic tasks, sentiment analysis, financial time series, financial reasoning, agent-based modeling, and other applications. For each application area, we delve into specific methodologies, such as textual analysis, knowledge-based analysis, forecasting, data augmentation, planning, decision support, and simulations. Furthermore, a comprehensive collection of datasets, model assets, and useful codes associated with mainstream applications are presented as resources for the researchers and practitioners. Finally, we outline the challenges and opportunities for future research, particularly emphasizing a number of distinctive aspects in this field. We hope our work can help facilitate the adoption and further development of LLMs in the financial sector." "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,vllm,\cite{vllm},"Efficient Memory Management for Large Language Model Serving with PagedAttention",http://arxiv.org/abs/2309.06180v1,"High throughput serving of large language models (LLMs) requires batching sufficiently many requests at a time. However, existing systems struggle because the key-value cache (KV cache) memory for each request is huge and grows and shrinks dynamically. When managed inefficiently, this memory can be significantly wasted by fragmentation and redundant duplication, limiting the batch size. To address this problem, we propose PagedAttention, an attention algorithm inspired by the classical virtual memory and paging techniques in operating systems. On top of it, we build vLLM, an LLM serving system that achieves (1) near-zero waste in KV cache memory and (2) flexible sharing of KV cache within and across requests to further reduce memory usage. Our evaluations show that vLLM improves the throughput of popular LLMs by 2-4$\times$ with the same level of latency compared to the state-of-the-art systems, such as FasterTransformer and Orca. The improvement is more pronounced with longer sequences, larger models, and more complex decoding algorithms. vLLM's source code is publicly available at https://github.com/vllm-project/vllm",True,True,"Woosuk Kwon and Zhuohan Li and Siyuan Zhuang and Ying Sheng and Lianmin Zheng and Cody Hao Yu and Joseph Gonzalez and Hao Zhang and Ion Stoica",2023.0,,https://doi.org/10.1145/3600006.3613165,10.1145/3600006.3613165,,"Efficient Memory Management for Large Language Model Serving with PagedAttention",Efficient Memory Management for Large Language Model ...,https://arxiv.org/pdf/2309.06180,"Efficient Memory Management for Large Language Model Serving with PagedAttention. Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, Ion Stoica (UC Berkeley; Stanford University; Independent Researcher; UC San Diego). Abstract: High throughput serving of large language models (LLMs) requires batching sufficiently many requests at a time. To address this problem, we propose PagedAttention, an attention algorithm inspired by the classical virtual memory and paging techniques in operating systems.
On top of it, we build vLLM, an LLM serving system that achieves (1) near-zero waste in KV cache memory and (2) flexible sharing of KV cache within and across requests to further reduce memory usage. To address the above limitations, we propose PagedAttention, an attention algorithm inspired by the operating system’s (OS) solution to memory fragmentation and sharing: virtual memory with paging. In this work, we build vLLM, a high-throughput distributed LLM serving engine on top of PagedAttention that achieves near-zero waste in KV cache memory." "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,chunkattention,\cite{chunkattention},"ChunkAttention: Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase Partition",http://arxiv.org/abs/2402.15220v4,"Self-attention is an essential component of large language models (LLM) but a significant source of inference latency for long sequences. In multi-tenant LLM serving scenarios, the compute and memory operation cost of self-attention can be optimized by using the probability that multiple LLM requests have shared system prompts in prefixes. In this paper, we introduce ChunkAttention, a prefix-aware self-attention module that can detect matching prompt prefixes across multiple requests and share their key/value tensors in memory at runtime to improve the memory utilization of KV cache. This is achieved by breaking monolithic key/value tensors into smaller chunks and structuring them into the auxiliary prefix tree. Consequently, on top of the prefix-tree based KV cache, we design an efficient self-attention kernel, where a two-phase partition algorithm is implemented to improve the data locality during self-attention computation in the presence of shared system prompts. Experiments show that ChunkAttention can speed up the self-attention kernel by 3.2-4.8$\times$ compared to the state-of-the-art implementation, with the length of the system prompt ranging from 1024 to 4096.",True,True,Lu Ye and Ze Tao and Yong Huang and Yang Li,2024.0,,https://aclanthology.org/2024.acl-long.623,,,"ChunkAttention: Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase Partition",[PDF] Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase ...,https://aclanthology.org/2024.acl-long.623.pdf,ChunkAttention is a prefix-aware self-attention module that uses a prefix-aware KV cache and two-phase partition to improve memory utilization "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,cachedattention,\cite{cachedattention},"Cost-Efficient Large Language Model Serving for Multi-turn Conversations with CachedAttention",,,True,False,"Bin Gao and Zhuomin He and Puru Sharma and Qingxuan Kang and Djordje Jevdjic and Junbo Deng and Xingkun Yang and Zhou Yu and Pengfei Zuo",2024.0,,https://www.usenix.org/conference/atc24/presentation/gao-bin-cost,,,"Cost-Efficient Large Language Model Serving for Multi-turn Conversations with CachedAttention",Cost-Efficient Large Language Model Serving for Multi-turn ...
- arXiv,https://arxiv.org/abs/2403.19708,"This paper proposes CachedAttention, a new attention mechanism that enables reuse of KV caches across multi-turn conversations, significantly reducing the" "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,promptcache,\cite{promptcache},Prompt Cache: Modular Attention Reuse for Low-Latency Inference,http://arxiv.org/abs/2311.04934v2,"We present Prompt Cache, an approach for accelerating inference for large language models (LLM) by reusing attention states across different LLM prompts. Many input prompts have overlapping text segments, such as system messages, prompt templates, and documents provided for context. Our key insight is that by precomputing and storing the attention states of these frequently occurring text segments on the inference server, we can efficiently reuse them when these segments appear in user prompts. Prompt Cache employs a schema to explicitly define such reusable text segments, called prompt modules. The schema ensures positional accuracy during attention state reuse and provides users with an interface to access cached states in their prompt. Using a prototype implementation, we evaluate Prompt Cache across several LLMs. We show that Prompt Cache significantly reduce latency in time-to-first-token, especially for longer prompts such as document-based question answering and recommendations. The improvements range from 8x for GPU-based inference to 60x for CPU-based inference, all while maintaining output accuracy and without the need for model parameter modifications.",True,True,"In Gim and Guojun Chen and Seung{-}Seob Lee and Nikhil Sarda and Anurag Khandelwal and Lin Zhong",2024.0,,https://proceedings.mlsys.org/paper\_files/paper/2024/hash/a66caa1703fe34705a4368c3014c1966-Abstract-Conference.html,,,Prompt Cache: Modular Attention Reuse for Low-Latency Inference,[PDF] Prompt Cache: Modular Attention Reuse for Low-Latency Inference,https://proceedings.mlsys.org/paper_files/paper/2024/file/a66caa1703fe34705a4368c3014c1966-Paper-Conference.pdf,"Prompt Cache accelerates LLM inference by reusing attention states of frequently occurring text segments, precomputed and stored in memory." "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,sglang,\cite{sglang},Efficiently Programming Large Language Models using SGLang,,,True,False,"Lianmin Zheng and Liangsheng Yin and Zhiqiang Xie and Jeff Huang and Chuyue Sun and Cody Hao Yu and Shiyi Cao and Christos Kozyrakis and Ion Stoica and Joseph E. Gonzalez and Clark W. Barrett and Ying Sheng",2023.0,,https://doi.org/10.48550/arXiv.2312.07104,10.48550/ARXIV.2312.07104,CoRR,Efficiently Programming Large Language Models using SGLang,Efficiently Programming Large Language Models using SGLang,https://arxiv.org/html/2312.07104v1,SGLang simplifies the writing of LLM programs and boosts execution efficiency. Our experiments demonstrate that SGLang can speed up common LLM tasks by up to 5 "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,cacheblend,\cite{cacheblend},"CacheBlend: Fast Large Language Model Serving for RAG with Cached Knowledge Fusion",http://arxiv.org/abs/2405.16444v3,"Large language models (LLMs) often incorporate multiple text chunks in their inputs to provide the necessary contexts. 
To speed up the prefill of the long LLM inputs, one can pre-compute the KV cache of a text and re-use the KV cache when the context is reused as the prefix of another LLM input. However, the reused text chunks are not always the input prefix, which makes precomputed KV caches not directly usable since they ignore the text's cross-attention with the preceding texts. Thus, the benefits of reusing KV caches remain largely unrealized. This paper tackles just one challenge: when an LLM input contains multiple text chunks, how to quickly combine their precomputed KV caches in order to achieve the same generation quality as the expensive full prefill (i.e., without reusing KV cache)? This challenge naturally arises in retrieval-augmented generation (RAG) where the input is supplemented with multiple retrieved texts as the context. We present CacheBlend, a scheme that reuses the precomputed KV caches, regardless prefix or not, and selectively recomputes the KV values of a small subset of tokens to partially update each reused KV cache. In the meantime, the small extra delay for recomputing some tokens can be pipelined with the retrieval of KV caches within the same job, allowing CacheBlend to store KV caches in slower devices with more storage capacity while retrieving them without increasing the inference delay. By comparing CacheBlend with the state-of-the-art KV cache reusing schemes on three open-source LLMs of various sizes and four popular benchmark datasets of different tasks, we show that CacheBlend reduces time-to-first-token (TTFT) by 2.2-3.3x and increases the inference throughput by 2.8-5x from full KV recompute without compromising generation quality. The code is available at https://github.com/LMCache/LMCache.",True,True,"Jiayi Yao and Hanchen Li and Yuhan Liu and Siddhant Ray and Yihua Cheng and Qizheng Zhang and Kuntai Du and Shan Lu and Junchen Jiang",2024.0,,https://doi.org/10.48550/arXiv.2405.16444,10.48550/ARXIV.2405.16444,CoRR,"CacheBlend: Fast Large Language Model Serving for RAG with Cached Knowledge Fusion",CacheBlend: Fast Large Language Model Serving for RAG ... 
- arXiv,https://arxiv.org/abs/2405.16444,"View a PDF of the paper titled CacheBlend: Fast Large Language Model Serving for RAG with Cached Knowledge Fusion, by Jiayi Yao and 8 other authors" "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,openaiapi,\cite{openaiapi},OpenAI developer platform,,,True,False,OpenAI,,,,,,OpenAI developer platform,"Introducing Verdi, an AI dev platform powered by GPT-4o - OpenAI",https://openai.com/index/mercado-libre/,"Verdi, a development platform layer using GPT-4o, GPT-4o mini, and GPT-3.5 Turbo, which is transforming how Mercado Libre handles customer service and other" "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,genimiapi,\cite{genimiapi},Gemini API,,,True,False,Google,2025.0,,,,,Gemini API,Gemini Developer API | Gemma open models | Google AI for ...,https://ai.google.dev/,"Gemini Developer API | Gemma open models | Google AI for Developers. Integrate Google AI models with an API key: build with cutting-edge AI models, like Gemini, Imagen, and Veo, from Google DeepMind. Unlock AI capabilities for your apps with a simple call to the Gemini API. Integrate AI models like Gemini Nano into web apps with Chrome's built-in web platform APIs. Build trusted and secure AI with guidance for responsible design, development, and deployment of models and applications." "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,claudeapi,\cite{claudeapi},Claude API,,,True,False,Anthropic,2025.0,,,,,Claude API,Anthropic API,https://docs.anthropic.com/en/home,"Anthropic Claude Documentation. Learn how to get started with the Anthropic API, the Console, and Claude Code. Explore the advanced features and capabilities now available in Claude." "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,mooncake,\cite{mooncake},Mooncake Trace,,,True,False,,2025.0,,,,,Mooncake Trace,kvcache-ai/Mooncake - GitHub,https://github.com/kvcache-ai/Mooncake,Moonshot AI.
Now both the Transfer Engine and Mooncake Store are open-sourced! This repository also hosts its technical report and the open sourced traces. "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,hu2024epic,\cite{hu2024epic},EPIC: Efficient Position-Independent Context Caching for Serving Large Language Models,,,True,False,Junhao Hu and Wenrui Huang and Haoyi Wang and Weidong Wang and Tiancheng Hu and Qin Zhang and Hao Feng and Xusheng Chen and Yizhou Shan and Tao Xie,2024.0,,https://arxiv.org/abs/2410.15332,,,EPIC: Efficient Position-Independent Context Caching for Serving Large Language Models,EPIC: Efficient Position-Independent Caching for Serving Large...,https://openreview.net/forum?id=qjd3ZUiHRT&referrer=%5Bthe%20profile%20of%20Yizhou%20Shan%5D(%2Fprofile%3Fid%3D~Yizhou_Shan2),"Summary: This paper introduces PICI, an efficient position-independent context caching system for serving large language models. The system pre-computes the KV" "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,streamingllm,\cite{streamingllm},Efficient Streaming Language Models with Attention Sinks,http://arxiv.org/abs/2309.17453v4,"Deploying Large Language Models (LLMs) in streaming applications such as multi-round dialogue, where long interactions are expected, is urgently needed but poses two major challenges. Firstly, during the decoding stage, caching previous tokens' Key and Value states (KV) consumes extensive memory. Secondly, popular LLMs cannot generalize to longer texts than the training sequence length. Window attention, where only the most recent KVs are cached, is a natural approach -- but we show that it fails when the text length surpasses the cache size. We observe an interesting phenomenon, namely attention sink, that keeping the KV of initial tokens will largely recover the performance of window attention. In this paper, we first demonstrate that the emergence of attention sink is due to the strong attention scores towards initial tokens as a ""sink"" even if they are not semantically important. Based on the above analysis, we introduce StreamingLLM, an efficient framework that enables LLMs trained with a finite length attention window to generalize to infinite sequence lengths without any fine-tuning. We show that StreamingLLM can enable Llama-2, MPT, Falcon, and Pythia to perform stable and efficient language modeling with up to 4 million tokens and more. In addition, we discover that adding a placeholder token as a dedicated attention sink during pre-training can further improve streaming deployment. In streaming settings, StreamingLLM outperforms the sliding window recomputation baseline by up to 22.2x speedup. Code and datasets are provided at https://github.com/mit-han-lab/streaming-llm.",True,True,"Guangxuan Xiao and Yuandong Tian and Beidi Chen and Song Han and Mike Lewis",2024.0,,https://openreview.net/forum?id=NG7sS51zVF,,,Efficient Streaming Language Models with Attention Sinks,Efficient Streaming Language Models with Attention Sinks,http://arxiv.org/pdf/2309.17453v4,"Deploying Large Language Models (LLMs) in streaming applications such as multi-round dialogue, where long interactions are expected, is urgently needed but poses two major challenges. Firstly, during the decoding stage, caching previous tokens' Key and Value states (KV) consumes extensive memory. Secondly, popular LLMs cannot generalize to longer texts than the training sequence length. 
Window attention, where only the most recent KVs are cached, is a natural approach -- but we show that it fails when the text length surpasses the cache size. We observe an interesting phenomenon, namely attention sink, that keeping the KV of initial tokens will largely recover the performance of window attention. In this paper, we first demonstrate that the emergence of attention sink is due to the strong attention scores towards initial tokens as a ""sink"" even if they are not semantically important. Based on the above analysis, we introduce StreamingLLM, an efficient framework that enables LLMs trained with a finite length attention window to generalize to infinite sequence lengths without any fine-tuning. We show that StreamingLLM can enable Llama-2, MPT, Falcon, and Pythia to perform stable and efficient language modeling with up to 4 million tokens and more. In addition, we discover that adding a placeholder token as a dedicated attention sink during pre-training can further improve streaming deployment. In streaming settings, StreamingLLM outperforms the sliding window recomputation baseline by up to 22.2x speedup. Code and datasets are provided at https://github.com/mit-han-lab/streaming-llm." "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,h2o,\cite{h2o},"{H2O:} Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models",,,True,False,"Zhenyu Zhang and Ying Sheng and Tianyi Zhou and Tianlong Chen and Lianmin Zheng and Ruisi Cai and Zhao Song and Yuandong Tian and Christopher R{\'{e}} and Clark W. Barrett and Zhangyang Wang and Beidi Chen",2023.0,,http://papers.nips.cc/paper\_files/paper/2023/hash/6ceefa7b15572587b78ecfcebb2827f8-Abstract-Conference.html,,,"{H2O:} Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models",Hogwild! Inference: Parallel LLM Generation via Concurrent Attention,https://arxiv.org/html/2504.06261v1,"H2o: Heavy-hitter oracle for efficient generative inference of large language models. Advances in Neural Information Processing Systems, 36" "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,infinigen,\cite{infinigen},"InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management",http://arxiv.org/abs/2406.19707v1,"Transformer-based large language models (LLMs) demonstrate impressive performance across various natural language processing tasks. Serving LLM inference for generating long contents, however, poses a challenge due to the enormous memory footprint of the transient state, known as the key-value (KV) cache, which scales with the sequence length and batch size. In this paper, we present InfiniGen, a novel KV cache management framework tailored for long-text generation, which synergistically works with modern offloading-based inference systems. InfiniGen leverages the key insight that a few important tokens that are essential for computing the subsequent attention layer in the Transformer can be speculated by performing a minimal rehearsal with the inputs of the current layer and part of the query weight and key cache of the subsequent layer. This allows us to prefetch only the essential KV cache entries (without fetching them all), thereby mitigating the fetch overhead from the host memory in offloading-based LLM serving systems. 
Our evaluation on several representative LLMs shows that InfiniGen improves the overall performance of a modern offloading-based system by up to 3.00x compared to prior KV cache management methods while offering substantially better model accuracy.",True,True,"Wonbeom Lee and Jungi Lee and Junghwan Seo and Jaewoong Sim",2024.0,,https://www.usenix.org/conference/osdi24/presentation/lee,,,"InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management",InfiniGen: Efficient Generative Inference of Large Language Models ...,https://arxiv.org/abs/2406.19707,"In this paper, we present InfiniGen, a novel KV cache management framework tailored for long-text generation, which synergistically works with modern" "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,pyramidkv,\cite{pyramidkv},"PyramidKV: Dynamic KV Cache Compression based on Pyramidal Information Funneling",http://arxiv.org/abs/2406.02069v4,"In this study, we investigate whether attention-based information flow inside large language models (LLMs) is aggregated through noticeable patterns for long context processing. Our observations reveal that LLMs aggregate information through Pyramidal Information Funneling where attention is scattering widely in lower layers, progressively consolidating within specific contexts, and ultimately focusing on critical tokens (a.k.a massive activation or attention sink) in higher layers. Motivated by these insights, we developed PyramidKV, a novel and effective KV cache compression method. This approach dynamically adjusts the KV cache size across different layers, allocating more cache in lower layers and less in higher ones, diverging from traditional methods that maintain a uniform KV cache size. Our experimental evaluations, utilizing the LongBench benchmark, show that PyramidKV matches the performance of models with a full KV cache while retaining only 12% of the KV cache, thus significantly reducing memory usage. In scenarios emphasizing memory efficiency, where only 0.7% of the KV cache is maintained, PyramidKV surpasses other KV cache compression techniques, achieving up to a 20.5 absolute accuracy improvement on TREC dataset. In the Needle-in-a-Haystack experiment, PyramidKV outperforms competing methods in maintaining long-context comprehension in LLMs; notably, retaining just 128 KV cache entries enables the LLAMA-3-70B model to achieve 100.0 Acc. performance.",True,True,"Zefan Cai and Yichi Zhang and Bofei Gao and Yuliang Liu and Tianyu Liu and Keming Lu and Wayne Xiong and Yue Dong and Baobao Chang and Junjie Hu and Wen Xiao",2024.0,,https://doi.org/10.48550/arXiv.2406.02069,10.48550/ARXIV.2406.02069,CoRR,"PyramidKV: Dynamic KV Cache Compression based on Pyramidal Information Funneling",PyramidKV: Dynamic KV Cache Compression based on Pyramidal...,https://openreview.net/forum?id=jZVNmDiU86,"We developed PyramidKV, a novel and effective KV cache compression method. This approach dynamically adjusts the KV cache size across different layers." 
"KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,KVQuant,\cite{KVQuant},"KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization",http://arxiv.org/abs/2401.18079v6,"LLMs are seeing growing use for applications which require large context windows, and with these large context windows KV cache activations surface as the dominant contributor to memory consumption during inference. Quantization is a promising approach for compressing KV cache activations; however, existing solutions fail to represent activations accurately in sub-4-bit precision. Our work, KVQuant, facilitates low precision KV cache quantization by incorporating several novel methods: (i) Per-Channel Key Quantization, where we adjust the dimension along which we quantize the Key activations to better match the distribution; (ii) Pre-RoPE Key Quantization, where we quantize Key activations before the rotary positional embedding to mitigate its impact on quantization; (iii) Non-Uniform KV Cache Quantization, where we derive per-layer sensitivity-weighted non-uniform datatypes that better represent the distributions; and (iv) Per-Vector Dense-and-Sparse Quantization, where we isolate outliers separately for each vector to minimize skews in quantization ranges. By applying our method to the LLaMA, Llama-2, Llama-3, and Mistral models, we achieve < 0.1 perplexity degradation with 3-bit quantization on both Wikitext-2 and C4, outperforming existing approaches. Our method enables serving LLaMA-7B with a context length of up to 1 million on a single A100-80GB GPU and up to 10 million on an 8-GPU system. We develop custom CUDA kernels for KVQuant, showing that we can achieve up to ~1.7x speedups, compared to baseline fp16 matrix-vector multiplications, for the LLaMA-7B model.",True,True,"Coleman Hooper and Sehoon Kim and Hiva Mohammadzadeh and Michael W. Mahoney and Yakun Sophia Shao and Kurt Keutzer and Amir Gholami",2024.0,,http://papers.nips.cc/paper\_files/paper/2024/hash/028fcbcf85435d39a40c4d61b42c99a4-Abstract-Conference.html,,,"KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization",KVQuant: Towards 10 Million Context Length LLM Inference with KV ...,https://github.com/SqueezeAILab/KVQuant,"GitHub - SqueezeAILab/KVQuant: [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization [Paper] KVQuant is a methodology for efficient KV cache quantization that incorporates several innovations to acheive accurate low-precision quantization, thereby enabling efficient long context length inference. TLDR: KVQuant addresses the memory bottleneck with long context length inference by quantizing the KV cache to low precision. title={KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization}, [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization" "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,lruk,\cite{lruk},The {LRU-K} Page Replacement Algorithm For Database Disk Buffering,,,True,False,"Elizabeth J. O'Neil and Patrick E. 
O'Neil and Gerhard Weikum",1993.0,,https://doi.org/10.1145/170035.170081,10.1145/170035.170081,,The {LRU-K} Page Replacement Algorithm For Database Disk Buffering,[PDF] The LRU-K Page Replacement Algorithm For Database Disk Buffering,https://www.cs.cmu.edu/~natassa/courses/15-721/papers/p297-o_neil.pdf,"The basic idea of LRU-K is to keep track of the times of the last K references to popular database pages, using this information to statistically estimate" "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,slru,\cite{slru},Caching Strategies to Improve Disk System Performance,,,True,False,"Ramakrishna Karedla and J. Spencer Love and Bradley G. Wherry",1994.0,,https://doi.org/10.1109/2.268884,10.1109/2.268884,Computer,Caching Strategies to Improve Disk System Performance,Caching strategies to improve disk system performance - IEEE Xplore,http://ieeexplore.ieee.org/document/268884/,"In this article, we examine the use of caching as a means to increase system response time and improve the data throughput of the disk subsystem." "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,twoq,\cite{twoq},"2Q: {A} Low Overhead High Performance Buffer Management Replacement Algorithm",,,True,False,"Theodore Johnson and Dennis E. Shasha",1994.0,,http://www.vldb.org/conf/1994/P439.PDF,,,"2Q: {A} Low Overhead High Performance Buffer Management Replacement Algorithm",2Q: A Low Overhead High Performance Buffer Management ...,https://dl.acm.org/doi/10.5555/645920.672996,2Q: A Low Overhead High Performance Buffer Management Replacement Algorithm. Authors: Theodore Johnson. "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,eelru,\cite{eelru},{EELRU:} Simple and Effective Adaptive Page Replacement,,,True,False,"Yannis Smaragdakis and Scott F. Kaplan and Paul R. Wilson",1999.0,,https://doi.org/10.1145/301453.301486,10.1145/301453.301486,,{EELRU:} Simple and Effective Adaptive Page Replacement,EELRU: Simple and Effective Adaptive Page Replacement,https://www.researchgate.net/publication/2822757_EELRU_Simple_and_Effective_Adaptive_Page_Replacement,"EELRU is a simple adaptive replacement algorithm, which uses only the kind of information needed by LRU---how recently each page has been touched relative to" "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,lrfu,\cite{lrfu},"{LRFU:} {A} Spectrum of Policies that Subsumes the Least Recently Used and Least Frequently Used Policies",,,True,False,"Donghee Lee and Jongmoo Choi and Jong{-}Hun Kim and Sam H. Noh and Sang Lyul Min and Yookun Cho and Chong{-}Sang Kim",2001.0,,https://doi.org/10.1109/TC.2001.970573,10.1109/TC.2001.970573,{IEEE} Trans. Computers,"{LRFU:} {A} Spectrum of Policies that Subsumes the Least Recently Used and Least Frequently Used Policies",[PDF] LRFU: a spectrum of policies that subsumes the least recently used ...,https://www.openu.ac.il/home/wiseman/2os/lru/lrfu.pdf,"Of these, the Least Recently Used (LRU) and the Least Frequently Used (LFU) block replacement policies constitute the two main streams. The LRU policy and its."
"KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,lirs,\cite{lirs},"{LIRS:} an efficient low inter-reference recency set replacement policy to improve buffer cache performance",,,True,False,"Song Jiang and Xiaodong Zhang",2002.0,,https://doi.org/10.1145/511334.511340,10.1145/511334.511340,,"{LIRS:} an efficient low inter-reference recency set replacement policy to improve buffer cache performance",LIRS: an efficient low inter-reference recency set replacement policy ...,https://www.researchgate.net/publication/367088056_LIRS_an_efficient_low_inter-reference_recency_set_replacement_policy_to_improve_buffer_cache_performance,"Many studies are focused on cache replacement algorithms, such as FIFO, LRU, LFU, and some advanced cache algorithms like ARC [19], LIRS [15] and 2Q [16]." "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,arc,\cite{arc},"{ARC:} {A} Self-Tuning, Low Overhead Replacement Cache",,,True,False,"Nimrod Megiddo and Dharmendra S. Modha",2003.0,,http://www.usenix.org/events/fast03/tech/megiddo.html,,,"{ARC:} {A} Self-Tuning, Low Overhead Replacement Cache","[PDF] ARC: A Self-Tuning, Low Overhead Replacement Cache",https://www.cs.cmu.edu/~natassa/courses/15-721/papers/arcfast.pdf,"We propose a new cache management policy, namely, Adaptive. Replacement Cache (ARC), that has several advantages. In response to evolving and changing access" "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,mq,\cite{mq},Second-Level Buffer Cache Management,,,True,False,"Yuanyuan Zhou and Zhifeng Chen and Kai Li",2004.0,,https://doi.org/10.1109/TPDS.2004.13,10.1109/TPDS.2004.13,{IEEE} Trans. Parallel Distributed Syst.,Second-Level Buffer Cache Management,[PDF] Second-Level Buffer Cache Management,https://www.openu.ac.il/home/wiseman/2os/lru/mq.pdf,This is a local cache replacement algorithm because it manages an L2 buffer cache without any information from first-level. "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,car,\cite{car},{CAR:} Clock with Adaptive Replacement,,,True,False,"Sorav Bansal and Dharmendra S. Modha",2004.0,,http://www.usenix.org/events/fast04/tech/bansal.html,,,{CAR:} Clock with Adaptive Replacement,CAR: Clock with Adaptive Replacement - Stanford CS Theory,http://theory.stanford.edu/~sbansal/pubs/fast04.pdf,"by S Bansal · Cited by 412 — CAR is a new algorithm that improves upon CLOCK by being scan-resistant, self-tuning, and adaptively capturing recency and frequency features." "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,clockpro,\cite{clockpro},CLOCK-Pro: An Effective Improvement of the {CLOCK} Replacement,,,True,False,"Song Jiang and Feng Chen and Xiaodong Zhang",2005.0,,http://www.usenix.org/events/usenix05/tech/general/jiang.html,,,CLOCK-Pro: An Effective Improvement of the {CLOCK} Replacement,CLOCK-Pro: An Effective Improvement of the CLOCK Replacement,https://www.usenix.org/conference/2005-usenix-annual-technical-conference/clock-pro-effective-improvement-clock-replacement,"We propose an improved CLOCK replacement policy, called CLOCK-Pro. 
By additionally keeping track of a limited number of replaced pages, CLOCK-Pro works in a" "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,DBLP:journals/tos/EinzigerEFM22,\cite{DBLP:journals/tos/EinzigerEFM22},Lightweight Robust Size Aware Cache Management,http://arxiv.org/abs/2105.08770v2,"Modern key-value stores, object stores, Internet proxy caches, as well as Content Delivery Networks (CDN) often manage objects of diverse sizes, e.g., blobs, video files of different lengths, images with varying resolution, and small documents. In such workloads, size-aware cache policies outperform size-oblivious algorithms. Unfortunately, existing size-aware algorithms tend to be overly complicated and computationally~expensive. Our work follows a more approachable pattern; we extend the prevalent (size-oblivious) TinyLFU cache admission policy to handle variable sized items. Implementing our approach inside two popular caching libraries only requires minor changes. We show that our algorithms yield competitive or better hit-ratios and byte hit-ratios compared to the state of the art size-aware algorithms such as AdaptSize, LHD, LRB, and GDSF. Further, a runtime comparison indicates that our implementation is faster by up to x3 compared to the best alternative, i.e., it imposes much lower CPU overhead.",True,True,"Gil Einziger and Ohad Eytan and Roy Friedman and Benjamin Manes",2022.0,,https://doi.org/10.1145/3507920,10.1145/3507920,{ACM} Trans. Storage,Lightweight Robust Size Aware Cache Management,Lightweight Robust Size Aware Cache Management,http://arxiv.org/pdf/2105.08770v2,"Modern key-value stores, object stores, Internet proxy caches, as well as Content Delivery Networks (CDN) often manage objects of diverse sizes, e.g., blobs, video files of different lengths, images with varying resolution, and small documents. In such workloads, size-aware cache policies outperform size-oblivious algorithms. Unfortunately, existing size-aware algorithms tend to be overly complicated and computationally~expensive. Our work follows a more approachable pattern; we extend the prevalent (size-oblivious) TinyLFU cache admission policy to handle variable sized items. Implementing our approach inside two popular caching libraries only requires minor changes. We show that our algorithms yield competitive or better hit-ratios and byte hit-ratios compared to the state of the art size-aware algorithms such as AdaptSize, LHD, LRB, and GDSF. Further, a runtime comparison indicates that our implementation is faster by up to x3 compared to the best alternative, i.e., it imposes much lower CPU overhead." "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,lhd,\cite{lhd},{LHD:} Improving Cache Hit Rate by Maximizing Hit Density,,,True,False,"Nathan Beckmann and Haoxian Chen and Asaf Cidon",2018.0,,https://www.usenix.org/conference/nsdi18/presentation/beckmann,,,{LHD:} Improving Cache Hit Rate by Maximizing Hit Density,LHD: improving cache hit rate by maximizing hit density,https://dl.acm.org/doi/10.5555/3307441.3307475,"We introduce least hit density (LHD), a novel eviction policy for key-value caches. LHD predicts each object's expected hits-per-space-consumed (hit density)." "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,cacheus,\cite{cacheus},Learning Cache Replacement with {CACHEUS},,,True,False,"Liana V. 
Rodriguez and Farzana Beente Yusuf and Steven Lyons and Eysler Paz and Raju Rangaswami and Jason Liu and Ming Zhao and Giri Narasimhan",2021.0,,https://www.usenix.org/conference/fast21/presentation/rodriguez,,,Learning Cache Replacement with {CACHEUS},Learning Cache Replacement with Cacheus,https://www.usenix.org/system/files/fast21-rodriguez.pdf,"by LV Rodriguez · 2021 · Cited by 125 — Furthermore, CACHEUS enables augmenting state-of-the-art algorithms (e.g., LIRS, ARC) by combining it with a complementary cache replacement" "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,sieve,\cite{sieve},"{SIEVE} is Simpler than {LRU:} an Efficient Turn-Key Eviction Algorithm for Web Caches",,,True,False,"Yazhuo Zhang and Juncheng Yang and Yao Yue and Ymir Vigfusson and K. V. Rashmi",2024.0,,https://www.usenix.org/conference/nsdi24/presentation/zhang-yazhuo,,,"{SIEVE} is Simpler than {LRU:} an Efficient Turn-Key Eviction Algorithm for Web Caches",SIEVE - An Efficient Turn-Key Eviction Algorithm for Web Caches,https://www.classcentral.com/course/youtube-nsdi-24-sieve-is-simpler-than-lru-an-efficient-turn-key-eviction-algorithm-for-web-caches-294624,"Discover how SIEVE outperforms traditional algorithms like LRU in simplicity, efficiency, and scalability for web cache workloads. Learn about the algorithm's" "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,cherkasova1998improving,\cite{cherkasova1998improving},Improving WWW proxies performance with greedy-dual-size-frequency caching policy,,,True,False,"Cherkasova, Ludmila",1998.0,,,,,Improving WWW proxies performance with greedy-dual-size-frequency caching policy,Improving WWW proxies performance with Greedy-Dual- ...,https://www.researchgate.net/publication/228542715_Improving_WWW_proxies_performance_with_Greedy-Dual-Size-Frequency_caching_policy,This paper introduces the Greedy-Dual-Size-Frequency caching policy to maximize hit and byte hit rates for WWW proxies. Proposed caching strategy incorporates "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,yang2020twemcache,\cite{yang2020twemcache},A large scale analysis of hundreds of in-memory cache clusters at Twitter,,,True,False,Juncheng Yang and Yao Yue and K. V. Rashmi,2020.0,,https://www.usenix.org/conference/osdi20/presentation/yang,,,A large scale analysis of hundreds of in-memory cache clusters at Twitter,[PDF] A large scale analysis of hundreds of in-memory cache clusters at ...,https://www.usenix.org/system/files/osdi20-yang.pdf,"A large scale analysis of hundreds of in-memory cache clusters at Twitter. Juncheng Yang, Carnegie Mellon University; Yao Yue, Twitter; K. V. Rashmi. Figure 2: Resources consumed for the three cache use cases (storage, computation, transient)."
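Several of the eviction policies catalogued above are compact enough to pin down in a few lines. As an illustrative aside (not part of any cited record), here is a toy Python sketch of the SIEVE algorithm from the \cite{sieve} entry, reconstructed only from the description in its abstract; the production design uses a lock-free linked list rather than a Python list, and all names here are invented for the sketch:

class SieveCache:
    """Toy SIEVE cache: FIFO insertion order plus one 'visited' bit per object.

    A hand scans from the tail (oldest) toward the head (newest), clearing
    visited bits and evicting the first unvisited object it meets; cache
    hits only set a bit, so they never reorder the queue."""

    def __init__(self, capacity):
        assert capacity >= 1
        self.capacity = capacity
        self.store = {}     # key -> value
        self.visited = {}   # key -> visited bit
        self.queue = []     # index 0 = tail (oldest), last index = head (newest)
        self.hand = 0       # eviction hand, scans tail -> head

    def get(self, key):
        if key in self.store:
            self.visited[key] = True   # a hit just flips the bit
            return self.store[key]
        return None

    def put(self, key, value):
        if key in self.store:
            self.store[key] = value
            self.visited[key] = True   # treat an update as a hit
            return
        if len(self.queue) >= self.capacity:
            self._evict()
        self.queue.append(key)         # new objects enter at the head
        self.store[key] = value
        self.visited[key] = False

    def _evict(self):
        while True:
            if self.hand >= len(self.queue):
                self.hand = 0          # wrap from the head back to the tail
            key = self.queue[self.hand]
            if self.visited[key]:
                self.visited[key] = False
                self.hand += 1         # retain and keep scanning
            else:
                self.queue.pop(self.hand)  # hand now points at the successor
                del self.store[key]
                del self.visited[key]
                return

Each full sweep clears every visited bit, so the scan always terminates; hits never move objects, which is the simplicity the paper's title advertises.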
"KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,berg2020cachelib,\cite{berg2020cachelib},The {CacheLib} Caching Engine: Design and Experiences at Scale,,,True,False,Benjamin Berg and Daniel S. Berger and Sara McAllister and Isaac Grosof and Sathya Gunasekar and Jimmy Lu and Michael Uhlar and Jim Carrig and Nathan Beckmann and Mor Harchol-Balter and Gregory R. Ganger,2020.0,,https://www.usenix.org/conference/osdi20/presentation/berg,,,The {CacheLib} Caching Engine: Design and Experiences at Scale,The CacheLib Caching Engine: Design and Experiences at Scale,https://www.usenix.org/conference/osdi20/presentation/berg,"CacheLib is a general-purpose caching engine, designed based on experiences with a range of caching use cases at Facebook, that facilitates the easy" "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,icebreaker,\cite{icebreaker},IceBreaker: warming serverless functions better with heterogeneity,,,True,False,"Rohan Basu Roy and Tirthak Patel and Devesh Tiwari",2022.0,,https://doi.org/10.1145/3503222.3507750,10.1145/3503222.3507750,,IceBreaker: warming serverless functions better with heterogeneity,[PDF] IceBreaker: Warming Serverless Functions Better with Heterogeneity,http://www1.ece.neu.edu/~ningfang/SimPaper/icebreaker-ASPLOS22.pdf,IceBreaker is a novel function pre-warming and keep-alive scheme for serverless functions that exploit server-heterogeneity to lower the keep-alive cost and "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,fasscache,\cite{fasscache},FaasCache: keeping serverless computing alive with greedy-dual caching,,,True,False,"Alexander Fuerst and Prateek Sharma",2021.0,,https://doi.org/10.1145/3445814.3446757,10.1145/3445814.3446757,,FaasCache: keeping serverless computing alive with greedy-dual caching,[PDF] FaasCache: Keeping Serverless Computing Alive with Greedy-Dual ...,https://afuerst.github.io/assets/FaasCache.pdf,"Keep-alive policies must keep functions alive based on their resource and usage characteristics, which is challenging due to the diversity in FaaS workloads." "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,DBLP:conf/osdi/ZhongLCHZL0024,\cite{DBLP:conf/osdi/ZhongLCHZL0024},"DistServe: Disaggregating Prefill and Decoding for Goodput-optimized Large Language Model Serving",http://arxiv.org/abs/2401.09670v3,"DistServe improves the performance of large language models (LLMs) serving by disaggregating the prefill and decoding computation. Existing LLM serving systems colocate the two phases and batch the computation of prefill and decoding across all users and requests. We find that this strategy not only leads to strong prefill-decoding interferences but also couples the resource allocation and parallelism plans for both phases. LLM applications often emphasize individual latency for each phase: time to first token (TTFT) for the prefill phase and time per output token (TPOT) of each request for the decoding phase. In the presence of stringent latency requirements, existing systems have to prioritize one latency over the other, or over-provision compute resources to meet both. DistServe assigns prefill and decoding computation to different GPUs, hence eliminating prefill-decoding interferences. 
Given the application's TTFT and TPOT requirements, DistServe co-optimizes the resource allocation and parallelism strategy tailored for each phase. DistServe also places the two phases according to the serving cluster's bandwidth to minimize the communication caused by disaggregation. As a result, DistServe significantly improves LLM serving performance in terms of the maximum rate that can be served within both TTFT and TPOT constraints on each GPU. Our evaluations show that on various popular LLMs, applications, and latency requirements, DistServe can serve 7.4x more requests or 12.6x tighter SLO, compared to state-of-the-art systems, while staying within latency constraints for > 90% of requests.",True,True,"Yinmin Zhong and Shengyu Liu and Junda Chen and Jianbo Hu and Yibo Zhu and Xuanzhe Liu and Xin Jin and Hao Zhang",2024.0,,https://www.usenix.org/conference/osdi24/presentation/zhong-yinmin,,,"DistServe: Disaggregating Prefill and Decoding for Goodput-optimized Large Language Model Serving",[PDF] DistServe: Disaggregating Prefill and Decoding for Goodput ...,https://www.usenix.org/system/files/osdi24-zhong-yinmin.pdf,"DistServe: Disaggregating Prefill and Decoding for Goodput-optimized Large Language Model Serving. Yinmin Zhong and Shengyu Liu, Peking University; Junda Chen, UC San Diego; Jianbo Hu, Peking University; Yibo Zhu, StepFun; Xuanzhe Liu and Xin Jin, Peking University; Hao Zhang, UC San Diego. Abstract: DistServe improves the performance of large language models (LLMs) serving by disaggregating the prefill and decoding computation."
Our evaluation under diverse real-world datasets shows that LoongServe improves the maximum throughput by up to 3.85$\times$ compared to the chunked prefill and 5.81$\times$ compared to the prefill-decoding disaggregation.",True,True,"Bingyang Wu and Shengyu Liu and Yinmin Zhong and Peng Sun and Xuanzhe Liu and Xin Jin",2024.0,,https://doi.org/10.48550/arXiv.2404.09526,10.48550/ARXIV.2404.09526,CoRR,"LoongServe: Efficiently Serving Long-Context Large Language Models with Elastic Sequence Parallelism",LoongServe: Efficiently Serving Long-Context Large Language ...,https://colab.ws/articles/10.1145%2F3694715.3695948,"LoongServe: Efficiently Serving Long-Context Large Language Models with Elastic Sequence Parallelism. Bingyang Wu, Shengyu Liu, Yinmin" "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,DBLP:conf/sosp/KwonLZ0ZY0ZS23,\cite{DBLP:conf/sosp/KwonLZ0ZY0ZS23},"Efficient Memory Management for Large Language Model Serving with PagedAttention",http://arxiv.org/abs/2309.06180v1,"High throughput serving of large language models (LLMs) requires batching sufficiently many requests at a time. However, existing systems struggle because the key-value cache (KV cache) memory for each request is huge and grows and shrinks dynamically. When managed inefficiently, this memory can be significantly wasted by fragmentation and redundant duplication, limiting the batch size. To address this problem, we propose PagedAttention, an attention algorithm inspired by the classical virtual memory and paging techniques in operating systems. On top of it, we build vLLM, an LLM serving system that achieves (1) near-zero waste in KV cache memory and (2) flexible sharing of KV cache within and across requests to further reduce memory usage. Our evaluations show that vLLM improves the throughput of popular LLMs by 2-4$\times$ with the same level of latency compared to the state-of-the-art systems, such as FasterTransformer and Orca. The improvement is more pronounced with longer sequences, larger models, and more complex decoding algorithms. vLLM's source code is publicly available at https://github.com/vllm-project/vllm",True,True,"Woosuk Kwon and Zhuohan Li and Siyuan Zhuang and Ying Sheng and Lianmin Zheng and Cody Hao Yu and Joseph Gonzalez and Hao Zhang and Ion Stoica",2023.0,,https://doi.org/10.1145/3600006.3613165,10.1145/3600006.3613165,,"Efficient Memory Management for Large Language Model Serving with PagedAttention",Efficient Memory Management for Large Language Model ...,https://arxiv.org/pdf/2309.06180,"Efficient Memory Management for Large Language Model Serving with PagedAttention. Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, Ion Stoica (UC Berkeley; Stanford University; Independent Researcher; UC San Diego). Abstract: High throughput serving of large language models (LLMs) requires batching sufficiently many requests at a time. To address this problem, we propose PagedAttention, an attention algorithm inspired by the classical virtual memory and paging techniques in operating systems. On top of it, we build vLLM, an LLM serving system that achieves (1) near-zero waste in KV cache memory and (2) flexible sharing of KV cache within and across requests to further reduce memory usage.
To address the above limitations, we propose PagedAttention, an attention algorithm inspired by the operating system’s (OS) solution to memory fragmentation and sharing: virtual memory with paging. In this work, we build vLLM, a high-throughput distributed LLM serving engine on top of PagedAttention that achieves near-zero waste in KV cache memory." "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,alpaserve,\cite{alpaserve},"AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving",http://arxiv.org/abs/2302.11665v2,"Model parallelism is conventionally viewed as a method to scale a single large deep learning model beyond the memory limits of a single device. In this paper, we demonstrate that model parallelism can be additionally used for the statistical multiplexing of multiple devices when serving multiple models, even when a single model can fit into a single device. Our work reveals a fundamental trade-off between the overhead introduced by model parallelism and the opportunity to exploit statistical multiplexing to reduce serving latency in the presence of bursty workloads. We explore the new trade-off space and present a novel serving system, AlpaServe, that determines an efficient strategy for placing and parallelizing collections of large deep learning models across a distributed cluster. Evaluation results on production workloads show that AlpaServe can process requests at up to 10x higher rates or 6x more burstiness while staying within latency constraints for more than 99% of requests.",True,True,Zhuohan Li and Lianmin Zheng and Yinmin Zhong and Vincent Liu and Ying Sheng and Xin Jin and Yanping Huang and Zhifeng Chen and Hao Zhang and Joseph E. Gonzalez and Ion Stoica,2023.0,,https://www.usenix.org/conference/osdi23/presentation/li-zhouhan,,,"AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving",alpa-projects/mms: AlpaServe - GitHub,https://github.com/alpa-projects/mms,This is the official implementation of our OSDI'23 paper: AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving. To reproduce "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,DBLP:conf/osdi/YuJKKC22,\cite{DBLP:conf/osdi/YuJKKC22},"Orca: {A} Distributed Serving System for Transformer-Based Generative Models",,,True,False,"Gyeong{-}In Yu and Joo Seong Jeong and Geon{-}Woo Kim and Soojeong Kim and Byung{-}Gon Chun",2022.0,,https://www.usenix.org/conference/osdi22/presentation/yu,,,"Orca: {A} Distributed Serving System for Transformer-Based Generative Models",Orca: A Distributed Serving System for Transformer-Based ... - USENIX,https://www.usenix.org/conference/osdi22/presentation/yu,"We have implemented a distributed serving system called ORCA, with additional designs for scalability to models with hundreds of billions of parameters." "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,DBLP:conf/isca/PatelCZSGMB24,\cite{DBLP:conf/isca/PatelCZSGMB24},Splitwise: Efficient generative LLM inference using phase splitting,http://arxiv.org/abs/2311.18677v2,"Recent innovations in generative large language models (LLMs) have made their applications and use-cases ubiquitous. This has led to large-scale deployments of these models, using complex, expensive, and power-hungry AI accelerators, most commonly GPUs.
These developments make LLM inference efficiency an important challenge. Based on our extensive characterization, we find that there are two main phases during an LLM inference request: a compute-intensive prompt computation, and a memory-intensive token generation, each with distinct latency, throughput, memory, and power characteristics. Despite state-of-the-art batching and scheduling, the token generation phase underutilizes compute resources. Specifically, unlike compute-intensive prompt computation phases, token generation phases do not require the compute capability of the latest GPUs, and can be run with lower power and cost. With Splitwise, we propose splitting the two phases of a LLM inference request on to separate machines. This allows us to use hardware that is well-suited for each phase, and provision resources independently per phase. However, splitting an inference request across machines requires state transfer from the machine running prompt computation over to the machine generating tokens. We implement and optimize this state transfer using the fast back-plane interconnects available in today's GPU clusters. We use the Splitwise technique to design LLM inference clusters using the same or different types of machines for the prompt computation and token generation phases. Our clusters are optimized for three key objectives: throughput, cost, and power. In particular, we show that we can achieve 1.4x higher throughput at 20% lower cost than current designs. Alternatively, we can achieve 2.35x more throughput with the same cost and power budgets.",True,True,"Pratyush Patel and Esha Choukse and Chaojie Zhang and Aashaka Shah and {\'{I}}{\~{n}}igo Goiri and Saeed Maleki and Ricardo Bianchini",2024.0,,https://doi.org/10.1109/ISCA59077.2024.00019,10.1109/ISCA59077.2024.00019,,Splitwise: Efficient generative LLM inference using phase splitting,Splitwise: Efficient generative LLM inference using phase splitting,http://arxiv.org/pdf/2311.18677v2,"Recent innovations in generative large language models (LLMs) have made their applications and use-cases ubiquitous. This has led to large-scale deployments of these models, using complex, expensive, and power-hungry AI accelerators, most commonly GPUs. These developments make LLM inference efficiency an important challenge. Based on our extensive characterization, we find that there are two main phases during an LLM inference request: a compute-intensive prompt computation, and a memory-intensive token generation, each with distinct latency, throughput, memory, and power characteristics. Despite state-of-the-art batching and scheduling, the token generation phase underutilizes compute resources. Specifically, unlike compute-intensive prompt computation phases, token generation phases do not require the compute capability of the latest GPUs, and can be run with lower power and cost. With Splitwise, we propose splitting the two phases of a LLM inference request on to separate machines. This allows us to use hardware that is well-suited for each phase, and provision resources independently per phase. However, splitting an inference request across machines requires state transfer from the machine running prompt computation over to the machine generating tokens. We implement and optimize this state transfer using the fast back-plane interconnects available in today's GPU clusters. 
We use the Splitwise technique to design LLM inference clusters using the same or different types of machines for the prompt computation and token generation phases. Our clusters are optimized for three key objectives: throughput, cost, and power. In particular, we show that we can achieve 1.4x higher throughput at 20% lower cost than current designs. Alternatively, we can achieve 2.35x more throughput with the same cost and power budgets." "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,298501,\cite{298501},{Cost-Efficient} Large Language Model Serving for Multi-turn Conversations with {CachedAttention},,,True,False,Bin Gao and Zhuomin He and Puru Sharma and Qingxuan Kang and Djordje Jevdjic and Junbo Deng and Xingkun Yang and Zhou Yu and Pengfei Zuo,2024.0,,https://www.usenix.org/conference/atc24/presentation/gao-bin-cost,,,{Cost-Efficient} Large Language Model Serving for Multi-turn Conversations with {CachedAttention},Cost-Efficient Large Language Model Serving for Multi-turn ... - arXiv,https://arxiv.org/abs/2403.19708,"Cost-Efficient Large Language Model Serving for Multi-turn Conversations with CachedAttention, by Bin Gao and 8 other authors. To address the problem, this paper proposes CachedAttention, a new attention mechanism that enables reuse of KV caches across multi-turn conversations, significantly reducing the repetitive computation overheads." "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,DBLP:journals/corr/abs-2412-17246,\cite{DBLP:journals/corr/abs-2412-17246},Fast and Live Model Auto Scaling with {O(1)} Host Caching,,,True,False,"Dingyan Zhang and Haotian Wang and Yang Liu and Xingda Wei and Yizhou Shan and Rong Chen and Haibo Chen",2024.0,,https://doi.org/10.48550/arXiv.2412.17246,10.48550/ARXIV.2412.17246,CoRR,Fast and Live Model Auto Scaling with {O(1)} Host Caching,Fast and Live Model Auto Scaling with O(1) Host Caching,https://arxiv.org/html/2412.17246v1,"Model autoscaling is the key mechanism to achieve serverless model-as-a-service, but it faces a fundamental trade-off between scaling speed and storage/memory" "KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider",2506.02634v1,shahrad2020serverless,\cite{shahrad2020serverless},"Serverless in the Wild: Characterizing and Optimizing the Serverless Workload at a Large Cloud Provider",http://arxiv.org/abs/2003.03423v3,"Function as a Service (FaaS) has been gaining popularity as a way to deploy computations to serverless backends in the cloud. This paradigm shifts the complexity of allocating and provisioning resources to the cloud provider, which has to provide the illusion of always-available resources (i.e., fast function invocations without cold starts) at the lowest possible resource cost. Doing so requires the provider to deeply understand the characteristics of the FaaS workload. Unfortunately, there has been little to no public information on these characteristics.
Thus, in this paper, we first characterize the entire production FaaS workload of Azure Functions. We show for example that most functions are invoked very infrequently, but there is an 8-order-of-magnitude range of invocation frequencies. Using observations from our characterization, we then propose a practical resource management policy that significantly reduces the number of function cold starts, while spending fewer resources than state-of-the-practice policies.",True,True,Mohammad Shahrad and Rodrigo Fonseca and Inigo Goiri and Gohar Chaudhry and Paul Batum and Jason Cooke and Eduardo Laureano and Colby Tresness and Mark Russinovich and Ricardo Bianchini,2020.0,,https://www.usenix.org/conference/atc20/presentation/shahrad,,,"Serverless in the Wild: Characterizing and Optimizing the Serverless Workload at a Large Cloud Provider",Characterizing and Optimizing the Serverless Workload at ...,https://www.usenix.org/system/files/atc20-shahrad.pdf,"by M Shahrad · 2020 · Cited by 879 — This paper characterizes Azure Functions' serverless workload, showing most functions are invoked infrequently, and proposes a resource" "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,liu2024:visual,\cite{liu2024:visual},Visual Instruction Tuning,http://arxiv.org/abs/2304.08485v2,"Instruction tuning large language models (LLMs) using machine-generated instruction-following data has improved zero-shot capabilities on new tasks, but the idea is less explored in the multimodal field. In this paper, we present the first attempt to use language-only GPT-4 to generate multimodal language-image instruction-following data. By instruction tuning on such generated data, we introduce LLaVA: Large Language and Vision Assistant, an end-to-end trained large multimodal model that connects a vision encoder and LLM for general-purpose visual and language understanding. Our early experiments show that LLaVA demonstrates impressive multimodal chat abilities, sometimes exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and yields an 85.1% relative score compared with GPT-4 on a synthetic multimodal instruction-following dataset. When fine-tuned on Science QA, the synergy of LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make GPT-4 generated visual instruction tuning data, our model and code base publicly available.",True,True,"Liu, Haotian and Li, Chunyuan and Wu, Qingyang and Lee, Yong Jae",2024.0,,,,Advances in neural information processing systems,Visual Instruction Tuning,Visual Instruction Tuning,http://arxiv.org/pdf/2304.08485v2,"Instruction tuning large language models (LLMs) using machine-generated instruction-following data has improved zero-shot capabilities on new tasks, but the idea is less explored in the multimodal field. In this paper, we present the first attempt to use language-only GPT-4 to generate multimodal language-image instruction-following data. By instruction tuning on such generated data, we introduce LLaVA: Large Language and Vision Assistant, an end-to-end trained large multimodal model that connects a vision encoder and LLM for general-purpose visual and language understanding. Our early experiments show that LLaVA demonstrates impressive multimodal chat abilities, sometimes exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and yields an 85.1% relative score compared with GPT-4 on a synthetic multimodal instruction-following dataset.
When fine-tuned on Science QA, the synergy of LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make GPT-4 generated visual instruction tuning data, our model and code base publicly available." "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,bai2023:qwen,\cite{bai2023:qwen},"Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond",,,True,False,"Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren",2023.0,,,,,"Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond",Qwen-VL: A Versatile Vision-Language Model for Understanding...,https://openreview.net/forum?id=qrGjFJVl3m,"Despite the effort in open-sourcing the model and its weights, the reviewers find QWEN-VL lacking in significant research contributions and technical novelty. * _**Open-source:**_ Qwen-VL is an open-sourced large vision-language model that excels in **(i)** achieving leading performance across a wide range of vision-language understanding and generation tasks, **(ii)** offering multi-lingual support, particularly in English and Chinese, **(iii)** accommodating multi-image and high-resolution inputs, and **(iv)** demonstrating fine-grained visual perception abilities, particularly in scene text-oriented visual question-answering and visual grounding. Unlike previous representative vision-language models like PaLI-X, which leverages proprietary in-house data and utilize publicly inaccessible model weights (_e.g._, ViT-22B), along with significantly high training costs, our Qwen-VL's training process is more practical and holds considerable referential significance for future research." "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,chen2023:sharegpt4v,\cite{chen2023:sharegpt4v},ShareGPT4V: Improving Large Multi-Modal Models with Better Captions,http://arxiv.org/abs/2311.12793v2,"In the realm of large multi-modal models (LMMs), efficient modality alignment is crucial yet often constrained by the scarcity of high-quality image-text data. To address this bottleneck, we introduce the ShareGPT4V dataset, a pioneering large-scale resource featuring 1.2 million highly descriptive captions, which surpasses existing datasets in diversity and information content, covering world knowledge, object properties, spatial relationships, and aesthetic evaluations. Specifically, ShareGPT4V originates from a curated 100K high-quality captions collected from advanced GPT4-Vision and has been expanded to 1.2M with a superb caption model trained on this subset. ShareGPT4V first demonstrates its effectiveness for the Supervised Fine-Tuning (SFT) phase, by substituting an equivalent quantity of detailed captions in existing SFT datasets with a subset of our high-quality captions, significantly enhancing the LMMs like LLaVA-7B, LLaVA-1.5-13B, and Qwen-VL-Chat-7B on the MME and MMBench benchmarks, with respective gains of 222.8/22.0/22.3 and 2.7/1.3/1.5. We further incorporate ShareGPT4V data into both the pre-training and SFT phases, obtaining ShareGPT4V-7B, a superior LMM based on a simple architecture that has remarkable performance across a majority of the multi-modal benchmarks. 
This project is available at https://ShareGPT4V.github.io to serve as a pivotal resource for advancing the LMMs community.",True,True,"Chen, Lin and Li, Jisong and Dong, Xiaoyi and Zhang, Pan and He, Conghui and Wang, Jiaqi and Zhao, Feng and Lin, Dahua",2023.0,,,,arXiv preprint arXiv:2311.12793,ShareGPT4V: Improving Large Multi-Modal Models with Better Captions,Improving Large Multi-Modal Models with Better Captions - arXiv,https://arxiv.org/abs/2311.12793,"arXiv:2311.12793 (cs). ShareGPT4V: Improving Large Multi-Modal Models with Better Captions, by Lin Chen and 7 other authors." "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,li2023:videochat,\cite{li2023:videochat},VideoChat: Chat-Centric Video Understanding,http://arxiv.org/abs/2305.06355v2,"In this paper, we initiate an attempt of developing an end-to-end chat-centric video understanding system, coined as VideoChat. It integrates video foundation models and large language models via a learnable neural interface, excelling in spatiotemporal reasoning, event localization, and causal relationship inference. To instructively tune this system, we build a video-centric instruction dataset, composed of thousands of videos associated with detailed descriptions and conversations. This dataset emphasizes spatiotemporal reasoning and captures causal relationships, providing a valuable asset for training our chat-centric video understanding system. Preliminary qualitative experiments demonstrate the potential of our system across a broad spectrum of video applications, which could serve as a simple prototype system for future research on chat-centric video understanding.
Access our code and data at https://github.com/OpenGVLab/Ask-Anything",True,True,"Li, KunChang and He, Yinan and Wang, Yi and Li, Yizhuo and Wang, Wenhai and Luo, Ping and Wang, Yali and Wang, Limin and Qiao, Yu",2023.0,,,,arXiv preprint arXiv:2305.06355,VideoChat: Chat-Centric Video Understanding,VideoChat : Chat-Centric Video Understanding,https://img.shlab.org.cn/pjlab/files/2023/06/638215855649090000.pdf,"by KC Li · 2023 · Cited by 853 — VideoChat is an end-to-end chat-centric video understanding system integrating video and large language models, excelling in spatiotemporal reasoning and" "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,zhang2023:video,\cite{zhang2023:video},Video-llama: An instruction-tuned audio-visual language model for video understanding,,,True,False,"Zhang, Hang and Li, Xin and Bing, Lidong",2023.0,,,,arXiv preprint arXiv:2306.02858,Video-llama: An instruction-tuned audio-visual language model for video understanding,[EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio ...,https://github.com/DAMO-NLP-SG/Video-LLaMA,"Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding (EMNLP 2023 Demo). The released checkpoints are the full weights (visual encoder + audio encoder + Q-Formers + language decoder) needed to launch Video-LLaMA: set `llama_model` (path to the language decoder), `imagebind_ckpt_path` (path to the audio encoder), `ckpt` (path to the VL branch) and `ckpt_2` (path to the AL branch) in eval_configs/video_llama_eval_withaudio.yaml accordingly. The training of each cross-modal branch (i.e., VL branch or AL branch) in Video-LLaMA consists of two stages." "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,lu2024:unified,\cite{lu2024:unified},"Unified-IO 2: Scaling Autoregressive Multimodal Models with Vision, Language, Audio, and Action",http://arxiv.org/abs/2312.17172v1,"We present Unified-IO 2, the first autoregressive multimodal model that is capable of understanding and generating image, text, audio, and action. To unify different modalities, we tokenize inputs and outputs -- images, text, audio, action, bounding boxes, etc., into a shared semantic space and then process them with a single encoder-decoder transformer model. Since training with such diverse modalities is challenging, we propose various architectural improvements to stabilize model training. We train our model from scratch on a large multimodal pre-training corpus from diverse sources with a multimodal mixture of denoisers objective. To learn an expansive set of skills, such as following multimodal instructions, we construct and finetune on an ensemble of 120 datasets with prompts and augmentations. With a single unified model, Unified-IO 2 achieves state-of-the-art performance on the GRIT benchmark and strong results in more than 35 benchmarks, including image generation and understanding, natural language understanding, video and audio understanding, and robotic manipulation.
We release all our models to the research community.",True,True,"Lu, Jiasen and Clark, Christopher and Lee, Sangho and Zhang, Zichen and Khosla, Savya and Marten, Ryan and Hoiem, Derek and Kembhavi, Aniruddha",2024.0,,,,,"Unified-IO 2: Scaling Autoregressive Multimodal Models with Vision, Language, Audio, and Action",Unified-IO 2: Scaling Autoregressive Multimodal Models with ...,https://openaccess.thecvf.com/content/CVPR2024/papers/Lu_Unified-IO_2_Scaling_Autoregressive_Multimodal_Models_with_Vision_Language_Audio_CVPR_2024_paper.pdf,"by J Lu · 2024 · Cited by 210 — UNIFIED-IO 2 is a model that understands and generates image, text, audio, and action, using a single encoder-decoder model." "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,achiam2023:gpt,\cite{achiam2023:gpt},Gpt-4 technical report,,,True,False,"Achiam, Josh and Adler, Steven and Agarwal, Sandhini and Ahmad, Lama and Akkaya, Ilge and Aleman, Florencia Leoni and Almeida, Diogo and Altenschmidt, Janko and Altman, Sam and Anadkat, Shyamal and others",2023.0,,,,arXiv preprint arXiv:2303.08774,Gpt-4 technical report,GPT-4 Technical Report,http://arxiv.org/pdf/2303.08774v6,"We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs. While less capable than humans in many real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers. GPT-4 is a Transformer-based model pre-trained to predict the next token in a document. The post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior. A core component of this project was developing infrastructure and optimization methods that behave predictably across a wide range of scales. This allowed us to accurately predict some aspects of GPT-4's performance based on models trained with no more than 1/1,000th the compute of GPT-4." "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,busso2008:iemocap,\cite{busso2008:iemocap},IEMOCAP: Interactive emotional dyadic motion capture database,,,True,False,"Busso, Carlos and Bulut, Murtaza and Lee, Chi-Chun and Kazemzadeh, Abe and Mower, Emily and Kim, Samuel and Chang, Jeannette N and Lee, Sungbok and Narayanan, Shrikanth S",2008.0,,,,Language resources and evaluation,IEMOCAP: Interactive emotional dyadic motion capture database,IEMOCAP- Home,https://sail.usc.edu/iemocap/,"The Interactive Emotional Dyadic Motion Capture (IEMOCAP) database is an acted, multimodal and multispeaker database, recently collected at SAIL lab at USC." "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,zadeh2018:multimodal,\cite{zadeh2018:multimodal},Multimodal language analysis in the wild: Cmu-mosei dataset and interpretable dynamic fusion graph,,,True,False,"Zadeh, AmirAli Bagher and Liang, Paul Pu and Poria, Soujanya and Cambria, Erik and Morency, Louis-Philippe",2018.0,,,,,Multimodal language analysis in the wild: Cmu-mosei dataset and interpretable dynamic fusion graph,The MOSEI Dataset and Interpretable Dynamic Fusion,https://pliang279.github.io/papers/dap2018_mosei.pdf,"by PP Liang · Cited by 30 — In this paper we introduce CMU-Multimodal Opinion
Sentiment and Emotion Intensity (CMU-MOSEI), the largest dataset for multimodal sentiment analysis and" "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,poria2019:meld,\cite{poria2019:meld},"MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations",http://arxiv.org/abs/1810.02508v6,"Emotion recognition in conversations is a challenging task that has recently gained popularity due to its potential applications. Until now, however, a large-scale multimodal multi-party emotional conversational database containing more than two speakers per dialogue was missing. Thus, we propose the Multimodal EmotionLines Dataset (MELD), an extension and enhancement of EmotionLines. MELD contains about 13,000 utterances from 1,433 dialogues from the TV-series Friends. Each utterance is annotated with emotion and sentiment labels, and encompasses audio, visual and textual modalities. We propose several strong multimodal baselines and show the importance of contextual and multimodal information for emotion recognition in conversations. The full dataset is available for use at http://affective-meld.github.io.",True,True,"Poria, Soujanya and Hazarika, Devamanyu and Majumder, Navonil and Naik, Gautam and Cambria, Erik and Mihalcea, Rada",2019.0,,,,,"MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations",MELD: A Multimodal Multi-Party Dataset for Emotion ...,https://github.com/declare-lab/MELD,"* /data/MELD/train_sent_emo.csv - contains the utterances in the training set along with Sentiment and Emotion labels. * /data/MELD/dev_sent_emo.csv - contains the utterances in the dev set along with Sentiment and Emotion labels. * /data/MELD/test_sent_emo.csv - contains the utterances in the test set along with Sentiment and Emotion labels. * /data/MELD_Dyadic/train_sent_emo_dya.csv - contains the utterances in the training set of the dyadic variant of MELD along with Sentiment and Emotion labels. * /data/MELD_Dyadic/test_sent_emo_dya.csv - contains the utterances in the test set of the dyadic variant along with Sentiment and Emotion labels. Each utterance in a dialogue has been labeled by any of these seven emotions -- Neutral, Joyful, Peaceful, Powerful, Scared, Mad and Sad. The annotations are borrowed from the original dataset." "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,han2023:champagne,\cite{han2023:champagne},CHAMPAGNE: Learning Real-world Conversation from Large-Scale Web Videos,http://arxiv.org/abs/2303.09713v2,"Visual information is central to conversation: body gestures and physical behaviour, for example, contribute to meaning that transcends words alone. To date, however, most neural conversational models are limited to just text. We introduce CHAMPAGNE, a generative model of conversations that can account for visual contexts. To train CHAMPAGNE, we collect and release YTD-18M, a large-scale corpus of 18M video-based dialogues. YTD-18M is constructed from web videos: crucial to our data collection pipeline is a pretrained language model that converts error-prone automatic transcripts to a cleaner dialogue format while maintaining meaning. Human evaluation reveals that YTD-18M is more sensible and specific than prior resources (MMDialog, 1M dialogues), while maintaining visual-groundedness.
Experiments demonstrate that 1) CHAMPAGNE learns to conduct conversation from YTD-18M; and 2) when fine-tuned, it achieves state-of-the-art results on four vision-language tasks focused on real-world conversations. We release data, models, and code.",True,True,"Han, Seungju and Hessel, Jack and Dziri, Nouha and Choi, Yejin and Yu, Youngjae",2023.0,,,,,CHAMPAGNE: Learning Real-world Conversation from Large-Scale Web Videos,[PDF] Learning Real-world Conversation from Large-Scale Web Videos,https://openaccess.thecvf.com/content/ICCV2023/papers/Han_CHAMPAGNE_Learning_Real-world_Conversation_from_Large-Scale_Web_Videos_ICCV_2023_paper.pdf,"Figure 1: CHAMPAGNE is a generative model of real-world conversational frames trained on YTD-18M, a dataset of 18M video-based dialogues." "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,park2024:let,\cite{park2024:let},Let's Go Real Talk: Spoken Dialogue Model for Face-to-Face Conversation,http://arxiv.org/abs/2406.07867v2,"In this paper, we introduce a novel Face-to-Face spoken dialogue model. It processes audio-visual speech from user input and generates audio-visual speech as the response, marking the initial step towards creating an avatar chatbot system without relying on intermediate text. To this end, we newly introduce MultiDialog, the first large-scale multimodal (i.e., audio and visual) spoken dialogue corpus containing 340 hours of approximately 9,000 dialogues, recorded based on the open domain dialogue dataset, TopicalChat. The MultiDialog contains parallel audio-visual recordings of conversation partners acting according to the given script with emotion annotations, which we expect to open up research opportunities in multimodal synthesis. Our Face-to-Face spoken dialogue model incorporates a textually pretrained large language model and adapts it into the audio-visual spoken dialogue domain by incorporating speech-text joint pretraining. Through extensive experiments, we validate the effectiveness of our model in facilitating a face-to-face conversation. Demo and data are available at https://multidialog.github.io and https://huggingface.co/datasets/IVLLab/MultiDialog, respectively.",True,True,"Park, Se Jin and Kim, Chae Won and Rha, Hyeongseop and Kim, Minsu and Hong, Joanna and Yeo, Jeong Hun and Ro, Yong Man",2024.0,,,,arXiv preprint arXiv:2406.07867,Let's Go Real Talk: Spoken Dialogue Model for Face-to-Face Conversation,Let's Go Real Talk: Spoken Dialogue Model for Face-to-Face...,https://openreview.net/forum?id=zby4Ade9CCF,"In this paper, we introduce a novel Face-to-Face spoken dialogue model.
It processes audio-visual speech from user input and generates audio-visual speech as" "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,shafique2023:nonverbal,\cite{shafique2023:nonverbal},Nonverbal Communication Cue Recognition: A Pathway to More Accessible Communication,,,True,False,"Shafique, Zoya and Wang, Haiyan and Tian, Yingli",2023.0,,,,,Nonverbal Communication Cue Recognition: A Pathway to More Accessible Communication,[PDF] Nonverbal Communication Cue Recognition: A Pathway to More ...,https://openaccess.thecvf.com/content/CVPR2023W/WiCV/papers/Shafique_Nonverbal_Communication_Cue_Recognition_A_Pathway_to_More_Accessible_Communication_CVPRW_2023_paper.pdf,"Nonverbal communication cues (NVCs) include body language, facial expressions, and hand gestures, conveying emotions and attitudes." "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,zhang2023:learning,\cite{zhang2023:learning},Learning Emotion Representations from Verbal and Nonverbal Communication,http://arxiv.org/abs/2305.13500v1,"Emotion understanding is an essential but highly challenging component of artificial general intelligence. The absence of extensively annotated datasets has significantly impeded advancements in this field. We present EmotionCLIP, the first pre-training paradigm to extract visual emotion representations from verbal and nonverbal communication using only uncurated data. Compared to numerical labels or descriptions used in previous methods, communication naturally contains emotion information. Furthermore, acquiring emotion representations from communication is more congruent with the human learning process. We guide EmotionCLIP to attend to nonverbal emotion cues through subject-aware context encoding and verbal emotion cues using sentiment-guided contrastive learning. Extensive experiments validate the effectiveness and transferability of EmotionCLIP. Using merely linear-probe evaluation protocol, EmotionCLIP outperforms the state-of-the-art supervised visual emotion recognition methods and rivals many multimodal approaches across various benchmarks. We anticipate that the advent of EmotionCLIP will address the prevailing issue of data scarcity in emotion understanding, thereby fostering progress in related domains. The code and pre-trained models are available at https://github.com/Xeaver/EmotionCLIP.",True,True,"Zhang, Sitao and Pan, Yimu and Wang, James Z",2023.0,,,,,Learning Emotion Representations from Verbal and Nonverbal Communication,Learning Emotion Representations from Verbal and Nonverbal Communication,http://arxiv.org/pdf/2305.13500v1,"Emotion understanding is an essential but highly challenging component of artificial general intelligence. The absence of extensively annotated datasets has significantly impeded advancements in this field. We present EmotionCLIP, the first pre-training paradigm to extract visual emotion representations from verbal and nonverbal communication using only uncurated data. Compared to numerical labels or descriptions used in previous methods, communication naturally contains emotion information. Furthermore, acquiring emotion representations from communication is more congruent with the human learning process. We guide EmotionCLIP to attend to nonverbal emotion cues through subject-aware context encoding and verbal emotion cues using sentiment-guided contrastive learning. 
Extensive experiments validate the effectiveness and transferability of EmotionCLIP. Using merely linear-probe evaluation protocol, EmotionCLIP outperforms the state-of-the-art supervised visual emotion recognition methods and rivals many multimodal approaches across various benchmarks. We anticipate that the advent of EmotionCLIP will address the prevailing issue of data scarcity in emotion understanding, thereby fostering progress in related domains. The code and pre-trained models are available at https://github.com/Xeaver/EmotionCLIP." "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,cherakara2023:furchat,\cite{cherakara2023:furchat},"FurChat: An Embodied Conversational Agent using LLMs, Combining Open and Closed-Domain Dialogue with Facial Expressions",http://arxiv.org/abs/2308.15214v2,"We demonstrate an embodied conversational agent that can function as a receptionist and generate a mixture of open and closed-domain dialogue along with facial expressions, by using a large language model (LLM) to develop an engaging conversation. We deployed the system onto a Furhat robot, which is highly expressive and capable of using both verbal and nonverbal cues during interaction. The system was designed specifically for the National Robotarium to interact with visitors through natural conversations, providing them with information about the facilities, research, news, upcoming events, etc. The system utilises the state-of-the-art GPT-3.5 model to generate such information along with domain-general conversations and facial expressions based on prompt engineering.",True,True,"Cherakara, Neeraj and Varghese, Finny and Shabana, Sheena and Nelson, Nivan and Karukayil, Abhiram and Kulothungan, Rohith and Farhan, Mohammed Afil and Nesset, Birthe and Moujahid, Meriam and Dinkar, Tanvi and others",2023.0,,,,,"FurChat: An Embodied Conversational Agent using LLMs, Combining Open and Closed-Domain Dialogue with Facial Expressions",[PDF] FurChat: An Embodied Conversational Agent using LLMs ...,https://aclanthology.org/2023.sigdial-1.55.pdf,"FurChat is an embodied conversational agent using LLMs, combining open and closed-domain dialogue with facial expressions, and can function as a receptionist." "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,lee2023:developing,\cite{lee2023:developing},"Developing Social Robots with Empathetic Non-Verbal Cues Using Large Language Models",http://arxiv.org/abs/2308.16529v1,"We propose augmenting the empathetic capacities of social robots by integrating non-verbal cues. Our primary contribution is the design and labeling of four types of empathetic non-verbal cues, abbreviated as SAFE: Speech, Action (gesture), Facial expression, and Emotion, in a social robot. These cues are generated using a Large Language Model (LLM). We developed an LLM-based conversational system for the robot and assessed its alignment with social cues as defined by human counselors. Preliminary results show distinct patterns in the robot's responses, such as a preference for calm and positive social emotions like 'joy' and 'lively', and frequent nodding gestures. Despite these tendencies, our approach has led to the development of a social robot capable of context-aware and more authentic interactions. 
Our work lays the groundwork for future studies on human-robot interactions, emphasizing the essential role of both verbal and non-verbal cues in creating social and empathetic robots.",True,True,"Lee, Yoon Kyung and Jung, Yoonwon and Kang, Gyuyi and Hahn, Sowon",2023.0,,,,arXiv preprint arXiv:2308.16529,"Developing Social Robots with Empathetic Non-Verbal Cues Using Large Language Models",Developing Social Robots with Empathetic Non-Verbal Cues Using ...,https://www.researchgate.net/publication/373552152_Developing_Social_Robots_with_Empathetic_Non-Verbal_Cues_Using_Large_Language_Models,We developed an LLM-based conversational system for the robot and assessed its alignment with social cues as defined by human counselors. Preliminary results "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,lin2023:one,\cite{lin2023:one},One-Stage 3D Whole-Body Mesh Recovery with Component Aware Transformer,http://arxiv.org/abs/2303.16160v1,"Whole-body mesh recovery aims to estimate the 3D human body, face, and hands parameters from a single image. It is challenging to perform this task with a single network due to resolution issues, i.e., the face and hands are usually located in extremely small regions. Existing works usually detect hands and faces, enlarge their resolution to feed in a specific network to predict the parameter, and finally fuse the results. While this copy-paste pipeline can capture the fine-grained details of the face and hands, the connections between different parts cannot be easily recovered in late fusion, leading to implausible 3D rotation and unnatural pose. In this work, we propose a one-stage pipeline for expressive whole-body mesh recovery, named OSX, without separate networks for each part. Specifically, we design a Component Aware Transformer (CAT) composed of a global body encoder and a local face/hand decoder. The encoder predicts the body parameters and provides a high-quality feature map for the decoder, which performs a feature-level upsample-crop scheme to extract high-resolution part-specific features and adopt keypoint-guided deformable attention to estimate hand and face precisely. The whole pipeline is simple yet effective without any manual post-processing and naturally avoids implausible prediction. Comprehensive experiments demonstrate the effectiveness of OSX. Lastly, we build a large-scale Upper-Body dataset (UBody) with high-quality 2D and 3D whole-body annotations. It contains persons with partially visible bodies in diverse real-life scenarios to bridge the gap between the basic task and downstream applications.",True,True,"Lin, Jing and Zeng, Ailing and Wang, Haoqian and Zhang, Lei and Li, Yu",2023.0,,,,,One-Stage 3D Whole-Body Mesh Recovery with Component Aware Transformer,IDEA-Research/OSX - GitHub,https://github.com/IDEA-Research/OSX,This repo is official PyTorch implementation of One-Stage 3D Whole-Body Mesh Recovery with Component Aware Transformer (CVPR2023). We propose the first one- "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,dwivedi2024:tokenhmr,\cite{dwivedi2024:tokenhmr},"TokenHMR: Advancing Human Mesh Recovery with a Tokenized Pose Representation",http://arxiv.org/abs/2404.16752v1,"We address the problem of regressing 3D human pose and shape from a single image, with a focus on 3D accuracy. 
The current best methods leverage large datasets of 3D pseudo-ground-truth (p-GT) and 2D keypoints, leading to robust performance. With such methods, we observe a paradoxical decline in 3D pose accuracy with increasing 2D accuracy. This is caused by biases in the p-GT and the use of an approximate camera projection model. We quantify the error induced by current camera models and show that fitting 2D keypoints and p-GT accurately causes incorrect 3D poses. Our analysis defines the invalid distances within which minimizing 2D and p-GT losses is detrimental. We use this to formulate a new loss Threshold-Adaptive Loss Scaling (TALS) that penalizes gross 2D and p-GT losses but not smaller ones. With such a loss, there are many 3D poses that could equally explain the 2D evidence. To reduce this ambiguity we need a prior over valid human poses but such priors can introduce unwanted bias. To address this, we exploit a tokenized representation of human pose and reformulate the problem as token prediction. This restricts the estimated poses to the space of valid poses, effectively providing a uniform prior. Extensive experiments on the EMDB and 3DPW datasets show that our reformulated keypoint loss and tokenization allows us to train on in-the-wild data while improving 3D accuracy over the state-of-the-art. Our models and code are available for research at https://tokenhmr.is.tue.mpg.de.",True,True,"Dwivedi, Sai Kumar and Sun, Yu and Patel, Priyanka and Feng, Yao and Black, Michael J",2024.0,,,,,"TokenHMR: Advancing Human Mesh Recovery with a Tokenized Pose Representation",TokenHMR: Advancing Human Mesh Recovery with a ...,https://github.com/saidwivedi/TokenHMR,Our method has two stages: Tokenization: The encoder maps continuous poses to discrete pose tokens. TokenHMR: During the training of human pose "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,danvevcek2022emoca,\cite{danvevcek2022emoca},EMOCA: Emotion Driven Monocular Face Capture and Animation,http://arxiv.org/abs/2204.11312v1,"As 3D facial avatars become more widely used for communication, it is critical that they faithfully convey emotion. Unfortunately, the best recent methods that regress parametric 3D face models from monocular images are unable to capture the full spectrum of facial expression, such as subtle or extreme emotions. We find the standard reconstruction metrics used for training (landmark reprojection error, photometric error, and face recognition loss) are insufficient to capture high-fidelity expressions. The result is facial geometries that do not match the emotional content of the input image. We address this with EMOCA (EMOtion Capture and Animation), by introducing a novel deep perceptual emotion consistency loss during training, which helps ensure that the reconstructed 3D expression matches the expression depicted in the input image. While EMOCA achieves 3D reconstruction errors that are on par with the current best methods, it significantly outperforms them in terms of the quality of the reconstructed expression and the perceived emotional content. We also directly regress levels of valence and arousal and classify basic expressions from the estimated 3D face parameters. On the task of in-the-wild emotion recognition, our purely geometric approach is on par with the best image-based methods, highlighting the value of 3D geometry in analyzing human behavior. 
The model and code are publicly available at https://emoca.is.tue.mpg.de.",True,True,"Dan{\v{e}}{\v{c}}ek, Radek and Black, Michael J and Bolkart, Timo",2022.0,,,,,EMOCA: Emotion Driven Monocular Face Capture and Animation,EMOCA: Emotion Driven Monocular Face Capture and Animation,http://arxiv.org/pdf/2204.11312v1,"As 3D facial avatars become more widely used for communication, it is critical that they faithfully convey emotion. Unfortunately, the best recent methods that regress parametric 3D face models from monocular images are unable to capture the full spectrum of facial expression, such as subtle or extreme emotions. We find the standard reconstruction metrics used for training (landmark reprojection error, photometric error, and face recognition loss) are insufficient to capture high-fidelity expressions. The result is facial geometries that do not match the emotional content of the input image. We address this with EMOCA (EMOtion Capture and Animation), by introducing a novel deep perceptual emotion consistency loss during training, which helps ensure that the reconstructed 3D expression matches the expression depicted in the input image. While EMOCA achieves 3D reconstruction errors that are on par with the current best methods, it significantly outperforms them in terms of the quality of the reconstructed expression and the perceived emotional content. We also directly regress levels of valence and arousal and classify basic expressions from the estimated 3D face parameters. On the task of in-the-wild emotion recognition, our purely geometric approach is on par with the best image-based methods, highlighting the value of 3D geometry in analyzing human behavior. The model and code are publicly available at https://emoca.is.tue.mpg.de." "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,yi2023:generating,\cite{yi2023:generating},Generating Holistic 3D Human Motion from Speech,http://arxiv.org/abs/2212.04420v2,"This work addresses the problem of generating 3D holistic body motions from human speech. Given a speech recording, we synthesize sequences of 3D body poses, hand gestures, and facial expressions that are realistic and diverse. To achieve this, we first build a high-quality dataset of 3D holistic body meshes with synchronous speech. We then define a novel speech-to-motion generation framework in which the face, body, and hands are modeled separately. The separated modeling stems from the fact that face articulation strongly correlates with human speech, while body poses and hand gestures are less correlated. Specifically, we employ an autoencoder for face motions, and a compositional vector-quantized variational autoencoder (VQ-VAE) for the body and hand motions. The compositional VQ-VAE is key to generating diverse results. Additionally, we propose a cross-conditional autoregressive model that generates body poses and hand gestures, leading to coherent and realistic motions. Extensive experiments and user studies demonstrate that our proposed approach achieves state-of-the-art performance both qualitatively and quantitatively. 
Our novel dataset and code will be released for research purposes at https://talkshow.is.tue.mpg.de.",True,True,"Yi, Hongwei and Liang, Hualin and Liu, Yifei and Cao, Qiong and Wen, Yandong and Bolkart, Timo and Tao, Dacheng and Black, Michael J",2023.0,,,,,Generating Holistic 3D Human Motion from Speech,Generating Holistic 3D Human Motion from Speech,http://arxiv.org/pdf/2212.04420v2,"This work addresses the problem of generating 3D holistic body motions from human speech. Given a speech recording, we synthesize sequences of 3D body poses, hand gestures, and facial expressions that are realistic and diverse. To achieve this, we first build a high-quality dataset of 3D holistic body meshes with synchronous speech. We then define a novel speech-to-motion generation framework in which the face, body, and hands are modeled separately. The separated modeling stems from the fact that face articulation strongly correlates with human speech, while body poses and hand gestures are less correlated. Specifically, we employ an autoencoder for face motions, and a compositional vector-quantized variational autoencoder (VQ-VAE) for the body and hand motions. The compositional VQ-VAE is key to generating diverse results. Additionally, we propose a cross-conditional autoregressive model that generates body poses and hand gestures, leading to coherent and realistic motions. Extensive experiments and user studies demonstrate that our proposed approach achieves state-of-the-art performance both qualitatively and quantitatively. Our novel dataset and code will be released for research purposes at https://talkshow.is.tue.mpg.de." "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,wu2024:motionllm,\cite{wu2024:motionllm},MotionLLM: Multimodal Motion-Language Learning with Large Language Models,,,True,False,"Wu, Qi and Zhao, Yubo and Wang, Yifan and Tai, Yu-Wing and Tang, Chi-Keung",2024.0,,,,arXiv preprint arXiv:2405.17013,MotionLLM: Multimodal Motion-Language Learning with Large Language Models,(PDF) MotionLLM: Multimodal Motion-Language Learning ...,https://www.researchgate.net/publication/380906869_MotionLLM_Multimodal_Motion-Language_Learning_with_Large_Language_Models,MotionGPT-2 accommodates multiple motion-relevant tasks and supporting multimodal control conditions through pre-trained Large Language Models ( "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,lu2023:humantomato,\cite{lu2023:humantomato},HumanTOMATO: Text-aligned Whole-body Motion Generation,http://arxiv.org/abs/2310.12978v1,"This work targets a novel text-driven whole-body motion generation task, which takes a given textual description as input and aims at generating high-quality, diverse, and coherent facial expressions, hand gestures, and body motions simultaneously. Previous works on text-driven motion generation tasks mainly have two limitations: they ignore the key role of fine-grained hand and face controlling in vivid whole-body motion generation, and lack a good alignment between text and motion. To address such limitations, we propose a Text-aligned whOle-body Motion generATiOn framework, named HumanTOMATO, which is the first attempt to our knowledge towards applicable holistic motion generation in this research area. 
To tackle this challenging task, our solution includes two key designs: (1) a Holistic Hierarchical VQ-VAE (aka H$^2$VQ) and a Hierarchical-GPT for fine-grained body and hand motion reconstruction and generation with two structured codebooks; and (2) a pre-trained text-motion-alignment model to help generated motion align with the input textual description explicitly. Comprehensive experiments verify that our model has significant advantages in both the quality of generated motions and their alignment with text.",True,True,"Lu, Shunlin and Chen, Ling-Hao and Zeng, Ailing and Lin, Jing and Zhang, Ruimao and Zhang, Lei and Shum, Heung-Yeung",2023.0,,,,arXiv preprint arXiv:2310.12978,HumanTOMATO: Text-aligned Whole-body Motion Generation,HumanTOMATO: Text-aligned Whole-body Motion ...,https://lhchen.top/HumanTOMATO/,"The proposed HumanTOMATO model can generate text-aligned whole-body motions with vivid and harmonious face, hand, and body motion." "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,ng2023:can,\cite{ng2023:can},Can Language Models Learn to Listen?,http://arxiv.org/abs/2308.10897v1,"We present a framework for generating appropriate facial responses from a listener in dyadic social interactions based on the speaker's words. Given an input transcription of the speaker's words with their timestamps, our approach autoregressively predicts a response of a listener: a sequence of listener facial gestures, quantized using a VQ-VAE. Since gesture is a language component, we propose treating the quantized atomic motion elements as additional language token inputs to a transformer-based large language model. Initializing our transformer with the weights of a language model pre-trained only on text results in significantly higher quality listener responses than training a transformer from scratch. We show that our generated listener motion is fluent and reflective of language semantics through quantitative metrics and a qualitative user study. In our evaluation, we analyze the model's ability to utilize temporal and semantic aspects of spoken text. Project page: https://people.eecs.berkeley.edu/~evonne_ng/projects/text2listen/",True,True,"Ng, Evonne and Subramanian, Sanjay and Klein, Dan and Kanazawa, Angjoo and Darrell, Trevor and Ginosar, Shiry",2023.0,,,,,Can Language Models Learn to Listen?,Can Language Models Learn to Listen?,http://arxiv.org/pdf/2308.10897v1,"We present a framework for generating appropriate facial responses from a listener in dyadic social interactions based on the speaker's words. Given an input transcription of the speaker's words with their timestamps, our approach autoregressively predicts a response of a listener: a sequence of listener facial gestures, quantized using a VQ-VAE. Since gesture is a language component, we propose treating the quantized atomic motion elements as additional language token inputs to a transformer-based large language model. Initializing our transformer with the weights of a language model pre-trained only on text results in significantly higher quality listener responses than training a transformer from scratch. We show that our generated listener motion is fluent and reflective of language semantics through quantitative metrics and a qualitative user study. In our evaluation, we analyze the model's ability to utilize temporal and semantic aspects of spoken text. 
Project page: https://people.eecs.berkeley.edu/~evonne_ng/projects/text2listen/" "Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues",2506.00958v1,ng2022:learning,\cite{ng2022:learning},Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion,http://arxiv.org/abs/2204.08451v1,"We present a framework for modeling interactional communication in dyadic conversations: given multimodal inputs of a speaker, we autoregressively output multiple possibilities of corresponding listener motion. We combine the motion and speech audio of the speaker using a motion-audio cross attention transformer. Furthermore, we enable non-deterministic prediction by learning a discrete latent representation of realistic listener motion with a novel motion-encoding VQ-VAE. Our method organically captures the multimodal and non-deterministic nature of nonverbal dyadic interactions. Moreover, it produces realistic 3D listener facial motion synchronous with the speaker (see video). We demonstrate that our method outperforms baselines qualitatively and quantitatively via a rich suite of experiments. To facilitate this line of research, we introduce a novel and large in-the-wild dataset of dyadic conversations. Code, data, and videos available at https://evonneng.github.io/learning2listen/.",True,True,"Ng, Evonne and Joo, Hanbyul and Hu, Liwen and Li, Hao and Darrell, Trevor and Kanazawa, Angjoo and Ginosar, Shiry",2022.0,,,,,Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion,[PDF] Learning To Listen: Modeling Non-Deterministic Dyadic Facial Motion,https://openaccess.thecvf.com/content/CVPR2022/papers/Ng_Learning_To_Listen_Modeling_Non-Deterministic_Dyadic_Facial_Motion_CVPR_2022_paper.pdf,"The method synthesizes listener motion from speaker video using a motion-audio transformer and a VQ-VAE, outputting multiple possibilities of listener motion." "Counterfactual Activation Editing for Post-hoc Prosody and Mispronunciation Correction in TTS Models",2506.00832v1,strom2006expressive,\cite{strom2006expressive},Expressive prosody for unit-selection speech synthesis.,,,True,False,"Strom, Volker and Clark, Robert AJ and King, Simon",2006.0,,,,,Expressive prosody for unit-selection speech synthesis.,Expressive Prosody for Unit-selection Speech Synthesis - CSTR,https://www.cstr.ed.ac.uk/downloads/publications/2006/strom06.pdf,"by V Strom · Cited by 42 — The Festival unit selection speech synthesis system, Multisyn [1], achieves highly natural synthetic speech by avoiding use of an ex- plicit model of prosody in" "Counterfactual Activation Editing for Post-hoc Prosody and Mispronunciation Correction in TTS Models",2506.00832v1,ren2019fastspeech,\cite{ren2019fastspeech},"FastSpeech: Fast, Robust and Controllable Text to Speech",http://arxiv.org/abs/1905.09263v5,"Neural network based end-to-end text to speech (TTS) has significantly improved the quality of synthesized speech. Prominent methods (e.g., Tacotron 2) usually first generate mel-spectrogram from text, and then synthesize speech from the mel-spectrogram using vocoder such as WaveNet. Compared with traditional concatenative and statistical parametric approaches, neural network based end-to-end models suffer from slow inference speed, and the synthesized speech is usually not robust (i.e., some words are skipped or repeated) and lack of controllability (voice speed or prosody control). 
In this work, we propose a novel feed-forward network based on Transformer to generate mel-spectrogram in parallel for TTS. Specifically, we extract attention alignments from an encoder-decoder based teacher model for phoneme duration prediction, which is used by a length regulator to expand the source phoneme sequence to match the length of the target mel-spectrogram sequence for parallel mel-spectrogram generation. Experiments on the LJSpeech dataset show that our parallel model matches autoregressive models in terms of speech quality, nearly eliminates the problem of word skipping and repeating in particularly hard cases, and can adjust voice speed smoothly. Most importantly, compared with autoregressive Transformer TTS, our model speeds up mel-spectrogram generation by 270x and the end-to-end speech synthesis by 38x. Therefore, we call our model FastSpeech.",True,True,"Ren, Yi and Ruan, Yangjun and Tan, Xu and Qin, Tao and Zhao, Sheng and Zhao, Zhou and Liu, Tie-Yan",2019.0,,,,Advances in neural information processing systems,"FastSpeech: Fast, Robust and Controllable Text to Speech","FastSpeech: Fast, Robust and Controllable Text to Speech",http://arxiv.org/pdf/1905.09263v5,"Neural network based end-to-end text to speech (TTS) has significantly improved the quality of synthesized speech. Prominent methods (e.g., Tacotron 2) usually first generate mel-spectrogram from text, and then synthesize speech from the mel-spectrogram using vocoder such as WaveNet. Compared with traditional concatenative and statistical parametric approaches, neural network based end-to-end models suffer from slow inference speed, and the synthesized speech is usually not robust (i.e., some words are skipped or repeated) and lack of controllability (voice speed or prosody control). In this work, we propose a novel feed-forward network based on Transformer to generate mel-spectrogram in parallel for TTS. Specifically, we extract attention alignments from an encoder-decoder based teacher model for phoneme duration prediction, which is used by a length regulator to expand the source phoneme sequence to match the length of the target mel-spectrogram sequence for parallel mel-spectrogram generation. Experiments on the LJSpeech dataset show that our parallel model matches autoregressive models in terms of speech quality, nearly eliminates the problem of word skipping and repeating in particularly hard cases, and can adjust voice speed smoothly. Most importantly, compared with autoregressive Transformer TTS, our model speeds up mel-spectrogram generation by 270x and the end-to-end speech synthesis by 38x. Therefore, we call our model FastSpeech." "Counterfactual Activation Editing for Post-hoc Prosody and Mispronunciation Correction in TTS Models",2506.00832v1,ren2020fastspeech,\cite{ren2020fastspeech},FastSpeech 2: Fast and High-Quality End-to-End Text to Speech,http://arxiv.org/abs/2006.04558v8,"Non-autoregressive text to speech (TTS) models such as FastSpeech can synthesize speech significantly faster than previous autoregressive models with comparable quality. The training of FastSpeech model relies on an autoregressive teacher model for duration prediction (to provide more information as input) and knowledge distillation (to simplify the data distribution in output), which can ease the one-to-many mapping problem (i.e., multiple speech variations correspond to the same text) in TTS. 
However, FastSpeech has several disadvantages: 1) the teacher-student distillation pipeline is complicated and time-consuming, 2) the duration extracted from the teacher model is not accurate enough, and the target mel-spectrograms distilled from teacher model suffer from information loss due to data simplification, both of which limit the voice quality. In this paper, we propose FastSpeech 2, which addresses the issues in FastSpeech and better solves the one-to-many mapping problem in TTS by 1) directly training the model with ground-truth target instead of the simplified output from teacher, and 2) introducing more variation information of speech (e.g., pitch, energy and more accurate duration) as conditional inputs. Specifically, we extract duration, pitch and energy from speech waveform and directly take them as conditional inputs in training and use predicted values in inference. We further design FastSpeech 2s, which is the first attempt to directly generate speech waveform from text in parallel, enjoying the benefit of fully end-to-end inference. Experimental results show that 1) FastSpeech 2 achieves a 3x training speed-up over FastSpeech, and FastSpeech 2s enjoys even faster inference speed; 2) FastSpeech 2 and 2s outperform FastSpeech in voice quality, and FastSpeech 2 can even surpass autoregressive models. Audio samples are available at https://speechresearch.github.io/fastspeech2/.",True,True,"Ren, Yi and Hu, Chenxu and Tan, Xu and Qin, Tao and Zhao, Sheng and Zhao, Zhou and Liu, Tie-Yan",2020.0,,,,arXiv preprint arXiv:2006.04558,FastSpeech 2: Fast and High-Quality End-to-End Text to Speech,FastSpeech 2: Fast and High-Quality End-to-End Text to Speech,https://www.microsoft.com/en-us/research/lab/microsoft-research-asia/articles/fastspeech-2-fast-and-high-quality-end-to-end-text-to-speech/,FastSpeech 2 outperforms FastSpeech in voice quality and enjoys a much simpler training pipeline (3x training time reduction) while inheriting its advantages. "Counterfactual Activation Editing for Post-hoc Prosody and Mispronunciation Correction in TTS Models",2506.00832v1,mohan2021ctrl,\cite{mohan2021ctrl},Ctrl-P: Temporal control of prosodic variation for speech synthesis,,,True,False,"Mohan, Devang S Ram and Hu, Vivian and Teh, Tian Huey and Torresquintero, Alexandra and Wallis, Christopher GR and Staib, Marlene and Foglianti, Lorenzo and Gao, Jiameng and King, Simon",2021.0,,,,arXiv preprint arXiv:2106.08352,Ctrl-P: Temporal control of prosodic variation for speech synthesis,Ctrl-P: Temporal Control of Prosodic Variation for Speech Synthesis,http://arxiv.org/pdf/2106.08352v1,"Text does not fully specify the spoken form, so text-to-speech models must be able to learn from speech data that vary in ways not explained by the corresponding text. One way to reduce the amount of unexplained variation in training data is to provide acoustic information as an additional learning signal. When generating speech, modifying this acoustic information enables multiple distinct renditions of a text to be produced. Since much of the unexplained variation is in the prosody, we propose a model that generates speech explicitly conditioned on the three primary acoustic correlates of prosody: $F_{0}$, energy and duration. The model is flexible about how the values of these features are specified: they can be externally provided, or predicted from text, or predicted then subsequently modified. 
Compared to a model that employs a variational auto-encoder to learn unsupervised latent features, our model provides more interpretable, temporally-precise, and disentangled control. When automatically predicting the acoustic features from text, it generates speech that is more natural than that from a Tacotron 2 model with reference encoder. Subsequent human-in-the-loop modification of the predicted acoustic features can significantly further increase naturalness." "Counterfactual Activation Editing for Post-hoc Prosody and Mispronunciation Correction in TTS Models",2506.00832v1,bandekar2023speaking,\cite{bandekar2023speaking},Speaking rate attention-based duration prediction for speed control TTS,http://arxiv.org/abs/2310.08846v1,"With the advent of high-quality speech synthesis, there is a lot of interest in controlling various prosodic attributes of speech. Speaking rate is an essential attribute towards modelling the expressivity of speech. In this work, we propose a novel approach to control the speaking rate for non-autoregressive TTS. We achieve this by conditioning the speaking rate inside the duration predictor, allowing implicit speaking rate control. We show the benefits of this approach by synthesising audio at various speaking rate factors and measuring the quality of speaking rate-controlled synthesised speech. Further, we study the effect of the speaking rate distribution of the training data towards effective rate control. Finally, we fine-tune a baseline pretrained TTS model to obtain speaking rate control TTS. We provide various analyses to showcase the benefits of using this proposed approach, along with objective as well as subjective metrics. We find that the proposed methods have higher subjective scores and lower speaker rate errors across many speaking rate factors over the baseline.",True,True,"Bandekar, Jesuraj and Udupa, Sathvik and Singh, Abhayjeet and Jayakumar, Anjali and Badiger, Sandhya and Kumar, Saurabh and VH, Pooja and Ghosh, Prasanta Kumar and others",2023.0,,,,arXiv preprint arXiv:2310.08846,Speaking rate attention-based duration prediction for speed control TTS,Speaking Rate Control of end-to-end TTS Models by Direct ...,https://www.isca-archive.org/interspeech_2022/lenglet22_interspeech.pdf,by M Lenglet · 2022 · Cited by 8 — Evaluation was performed on the control of speaking rate on both attention-based (TC) and duration predictor based (FS) methods. Objective analyses showed "Counterfactual Activation Editing for Post-hoc Prosody and Mispronunciation Correction in TTS Models",2506.00832v1,wang2018style,\cite{wang2018style},"Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis",http://arxiv.org/abs/1803.09017v1,"In this work, we propose ""global style tokens"" (GSTs), a bank of embeddings that are jointly trained within Tacotron, a state-of-the-art end-to-end speech synthesis system. The embeddings are trained with no explicit labels, yet learn to model a large range of acoustic expressiveness. GSTs lead to a rich set of significant results. The soft interpretable ""labels"" they generate can be used to control synthesis in novel ways, such as varying speed and speaking style - independently of the text content. They can also be used for style transfer, replicating the speaking style of a single audio clip across an entire long-form text corpus. 
When trained on noisy, unlabeled found data, GSTs learn to factorize noise and speaker identity, providing a path towards highly scalable but robust speech synthesis.",True,True,"Wang, Yuxuan and Stanton, Daisy and Zhang, Yu and Skerry-Ryan, RJ and Battenberg, Eric and Shor, Joel and Xiao, Ying and Jia, Ye and Ren, Fei and Saurous, Rif A",2018.0,,,,,"Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis","Unsupervised Style Modeling, Control and Transfer in End- ...",https://research.google/pubs/style-tokens-unsupervised-style-modeling-control-and-transfer-in-end-to-end-speech-synthesis/,"by Y Wang · Cited by 1080 — In this work, we propose “global style tokens” (GSTs), a bank of embeddings that are jointly trained within Tacotron, a state-of-the-art end-to-end speech"
This amounts to using a Gaussian mixture model (GMM) for the latent distribution. Extensive evaluation demonstrates its ability to control the aforementioned attributes. In particular, we train a high-quality controllable TTS model on real found data, which is capable of inferring speaker and style attributes from a noisy utterance and use it to synthesize clean speech with controllable speaking style.",True,True,"Hsu, Wei-Ning and Zhang, Yu and Weiss, Ron J and Zen, Heiga and Wu, Yonghui and Wang, Yuxuan and Cao, Yuan and Jia, Ye and Chen, Zhifeng and Shen, Jonathan and others",2018.0,,,,arXiv preprint arXiv:1810.07217,Hierarchical Generative Modeling for Controllable Speech Synthesis,Hierarchical Generative Modeling for Controllable Speech Synthesis,http://arxiv.org/pdf/1810.07217v2,"This paper proposes a neural sequence-to-sequence text-to-speech (TTS) model which can control latent attributes in the generated speech that are rarely annotated in the training data, such as speaking style, accent, background noise, and recording conditions. The model is formulated as a conditional generative model based on the variational autoencoder (VAE) framework, with two levels of hierarchical latent variables. The first level is a categorical variable, which represents attribute groups (e.g. clean/noisy) and provides interpretability. The second level, conditioned on the first, is a multivariate Gaussian variable, which characterizes specific attribute configurations (e.g. noise level, speaking rate) and enables disentangled fine-grained control over these attributes. This amounts to using a Gaussian mixture model (GMM) for the latent distribution. Extensive evaluation demonstrates its ability to control the aforementioned attributes. In particular, we train a high-quality controllable TTS model on real found data, which is capable of inferring speaker and style attributes from a noisy utterance and use it to synthesize clean speech with controllable speaking style." "Counterfactual Activation Editing for Post-hoc Prosody and Mispronunciation Correction in TTS Models",2506.00832v1,lenglet2022speaking,\cite{lenglet2022speaking},Speaking Rate Control of end-to-end TTS Models by Direct Manipulation of the Encoder's Output Embeddings,,,True,False,"Lenglet, Martin and Perrotin, Olivier and Bailly, G{\'e}rard",2022.0,,,,,Speaking Rate Control of end-to-end TTS Models by Direct Manipulation of the Encoder's Output Embeddings,Speaking Rate Control of end-to-end TTS Models by ... - ISCA Archive,https://www.isca-archive.org/interspeech_2022/lenglet22_interspeech.html,Experimental results show that the control provided by embeddings reproduces a behaviour closer to natural speech data. "Counterfactual Activation Editing for Post-hoc Prosody and Mispronunciation Correction in TTS Models",2506.00832v1,zhang2020unified,\cite{zhang2020unified},Unified Mandarin TTS Front-end Based on Distilled BERT Model,http://arxiv.org/abs/2012.15404v1,"The front-end module in a typical Mandarin text-to-speech system (TTS) is composed of a long pipeline of text processing components, which requires extensive efforts to build and is prone to large accumulative model size and cascade errors. In this paper, a pre-trained language model (PLM) based model is proposed to simultaneously tackle the two most important tasks in TTS front-end, i.e., prosodic structure prediction (PSP) and grapheme-to-phoneme (G2P) conversion. 
We use a pre-trained Chinese BERT[1] as the text encoder and employ multi-task learning technique to adapt it to the two TTS front-end tasks. Then, the BERT encoder is distilled into a smaller model by employing a knowledge distillation technique called TinyBERT[2], making the whole model size 25% of that of benchmark pipeline models while maintaining competitive performance on both tasks. With the proposed methods, we are able to run the whole TTS front-end module in a light and unified manner, which is more friendly to deployment on mobile devices.",True,True,"Zhang, Yang and Deng, Liqun and Wang, Yasheng",2020.0,,,,arXiv preprint arXiv:2012.15404,Unified Mandarin TTS Front-end Based on Distilled BERT Model,Unified Mandarin TTS Front-end Based on Distilled BERT Model,https://arxiv.org/abs/2012.15404,We use a pre-trained Chinese BERT[1] as the text encoder and employ multi-task learning technique to adapt it to the two TTS front-end tasks. "Counterfactual Activation Editing for Post-hoc Prosody and Mispronunciation Correction in TTS Models",2506.00832v1,fong2022speech,\cite{fong2022speech},Speech Audio Corrector: using speech from non-target speakers for one-off correction of mispronunciations in grapheme-input text-to-speech,,,True,False,"Fong, Jason and Lyth, Daniel and Henter, Gustav Eje and Tang, Hao and King, Simon",2022.0,,,,,Speech Audio Corrector: using speech from non-target speakers for one-off correction of mispronunciations in grapheme-input text-to-speech,[PDF] using speech from non-target speakers for one-off correction of ...,https://www.research.ed.ac.uk/files/364801102/Speech_Audio_Corrector_FONG_DOA13062022_VOR.pdf,Missing: 04/08/2025 Dual Debiasing for Noisy In-Context Learning for Text Generation,2506.00418v1,yoo2022ground,\cite{yoo2022ground},"Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations",http://arxiv.org/abs/2205.12685v2,"Despite recent explosion of interests in in-context learning, the underlying mechanism and the precise impact of the quality of demonstrations remain elusive. Intuitively, ground-truth labels should have as much impact in in-context learning (ICL) as supervised learning, but recent work reported that the input-label correspondence is significantly less important than previously thought. Intrigued by this counter-intuitive observation, we re-examine the importance of ground-truth labels in in-context learning. With the introduction of two novel metrics, namely Label-Correctness Sensitivity and Ground-truth Label Effect Ratio (GLER), we were able to conduct quantifiable analysis on the impact of ground-truth label demonstrations. Through extensive analyses, we find that the correct input-label mappings can have varying impacts on the downstream in-context learning performances, depending on the experimental configuration. Through additional studies, we identify key components, such as the verbosity of prompt templates and the language model size, as the controlling factor to achieve more noise-resilient ICL.",True,True,"Yoo, Kang Min and Kim, Junyeob and Kim, Hyuhng Joon and Cho, Hyunsoo and Jo, Hwiyeol and Lee, Sang-Woo and Lee, Sang-goo and Kim, Taeuk",2022.0,,,,,"Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations",Ground-Truth Labels Matter: A Deeper Look into Input- ...,https://aclanthology.org/2022.emnlp-main.155.pdf,"by KM Yoo · 2022 · Cited by 100 — We propose two new quantifiable metrics, sensitivity and GLER, to measure the impact of ground-truth label demonstrations on ICL.
• We conduct" Dual Debiasing for Noisy In-Context Learning for Text Generation,2506.00418v1,o2023contrastive,\cite{o2023contrastive},Contrastive Decoding Improves Reasoning in Large Language Models,http://arxiv.org/abs/2309.09117v2,"We demonstrate that Contrastive Decoding -- a simple, computationally light, and training-free text generation method proposed by Li et al 2022 -- achieves large out-of-the-box improvements over greedy decoding on a variety of reasoning tasks. Originally shown to improve the perceived quality of long-form text generation, Contrastive Decoding searches for strings that maximize a weighted difference in likelihood between strong and weak models. We show that Contrastive Decoding leads LLaMA-65B to outperform LLaMA 2, GPT-3.5 and PaLM 2-L on the HellaSwag commonsense reasoning benchmark, and to outperform LLaMA 2, GPT-3.5 and PaLM-540B on the GSM8K math word reasoning benchmark, in addition to improvements on a collection of other tasks. Analysis suggests that Contrastive Decoding improves over existing methods by preventing some abstract reasoning errors, as well as by avoiding simpler modes such as copying sections of the input during chain-of-thought. Overall, Contrastive Decoding outperforms nucleus sampling for long-form generation and greedy decoding for reasoning tasks, making it a powerful general purpose method for generating text from language models.",True,True,"O'Brien, Sean and Lewis, Mike",2023.0,,,,arXiv preprint arXiv:2309.09117,Contrastive Decoding Improves Reasoning in Large Language Models,Contrastive Decoding Improves Reasoning in Large Language Models,http://arxiv.org/pdf/2309.09117v2,"We demonstrate that Contrastive Decoding -- a simple, computationally light, and training-free text generation method proposed by Li et al 2022 -- achieves large out-of-the-box improvements over greedy decoding on a variety of reasoning tasks. Originally shown to improve the perceived quality of long-form text generation, Contrastive Decoding searches for strings that maximize a weighted difference in likelihood between strong and weak models. We show that Contrastive Decoding leads LLaMA-65B to outperform LLaMA 2, GPT-3.5 and PaLM 2-L on the HellaSwag commonsense reasoning benchmark, and to outperform LLaMA 2, GPT-3.5 and PaLM-540B on the GSM8K math word reasoning benchmark, in addition to improvements on a collection of other tasks. Analysis suggests that Contrastive Decoding improves over existing methods by preventing some abstract reasoning errors, as well as by avoiding simpler modes such as copying sections of the input during chain-of-thought. Overall, Contrastive Decoding outperforms nucleus sampling for long-form generation and greedy decoding for reasoning tasks, making it a powerful general purpose method for generating text from language models." Dual Debiasing for Noisy In-Context Learning for Text Generation,2506.00418v1,li2023unified,\cite{li2023unified},Unified Demonstration Retriever for In-Context Learning,http://arxiv.org/abs/2305.04320v2,"In-context learning is a new learning paradigm where a language model conditions on a few input-output pairs (demonstrations) and a test input, and directly outputs the prediction. It has been shown highly dependent on the provided demonstrations and thus promotes the research of demonstration retrieval: given a test input, relevant examples are retrieved from the training set to serve as informative demonstrations for in-context learning. 
While previous works focus on training task-specific retrievers for several tasks separately, these methods are often hard to transfer and scale on various tasks, and separately trained retrievers incur a lot of parameter storage and deployment cost. In this paper, we propose Unified Demonstration Retriever (\textbf{UDR}), a single model to retrieve demonstrations for a wide range of tasks. To train UDR, we cast various tasks' training signals into a unified list-wise ranking formulation by language model's feedback. Then we propose a multi-task list-wise ranking training framework, with an iterative mining strategy to find high-quality candidates, which can help UDR fully incorporate various tasks' signals. Experiments on 30+ tasks across 13 task families and multiple data domains show that UDR significantly outperforms baselines. Further analyses show the effectiveness of each proposed component and UDR's strong ability in various scenarios including different LMs (1.3B - 175B), unseen datasets, varying demonstration quantities, etc.",True,True,"Li, Xiaonan and Lv, Kai and Yan, Hang and Lin, Tianyang and Zhu, Wei and Ni, Yuan and Xie, Guotong and Wang, Xiaoling and Qiu, Xipeng",2023.0,,,,,Unified Demonstration Retriever for In-Context Learning,Unified Demonstration Retriever for In-Context Learning,https://aclanthology.org/2023.acl-long.256/,"In this paper, we propose Unified Demonstration Retriever (UDR), a single model to retrieve demonstrations for a wide range of tasks." Dual Debiasing for Noisy In-Context Learning for Text Generation,2506.00418v1,liucontext,\cite{liucontext},"In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering",http://arxiv.org/abs/2311.06668v3,"Large language models (LLMs) demonstrate emergent in-context learning capabilities, where they adapt to new tasks based on example demonstrations. However, in-context learning has seen limited effectiveness in many settings, is difficult to quantitatively control and takes up context window space. To overcome these limitations, we propose an alternative approach that recasts in-context learning as in-context vectors (ICV). Using ICV has two steps. We first use a forward pass on demonstration examples to create the in-context vector from the latent embedding of the LLM. This vector captures essential information about the intended task. On a new query, instead of adding demonstrations to the prompt, we shift the latent states of the LLM using the ICV. The ICV approach has several benefits: 1) it enables the LLM to more effectively follow the demonstration examples; 2) it's easy to control by adjusting the magnitude of the ICV; 3) it reduces the length of the prompt by removing the in-context demonstrations; 4) ICV is computationally much more efficient than fine-tuning. We demonstrate that ICV achieves better performance compared to standard in-context learning and fine-tuning on diverse tasks including safety, style transfer, role-playing and formatting. 
Moreover, we show that we can flexibly teach LLM to simultaneously follow different types of instructions by simple vector arithmetics on the corresponding ICVs.",True,True,"Liu, Sheng and Ye, Haotian and Xing, Lei and Zou, James Y",,,,,,"In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering",Making In Context Learning More Effective and ...,https://consensus.app/papers/incontext-vectors-making-in-context-learning-more-zou-liu/20a28c8387155fa1ac876aad9841f1ee,"Key takeaway: 'In-context vectors (ICV) improve in-context learning effectiveness, controllability, and computational efficiency in large" Dual Debiasing for Noisy In-Context Learning for Text Generation,2506.00418v1,min2022rethinking,\cite{min2022rethinking},"Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?",http://arxiv.org/abs/2202.12837v2,"Large language models (LMs) are able to in-context learn -- perform a new task via inference alone by conditioning on a few input-label pairs (demonstrations) and making predictions for new inputs. However, there has been little understanding of how the model learns and which aspects of the demonstrations contribute to end task performance. In this paper, we show that ground truth demonstrations are in fact not required -- randomly replacing labels in the demonstrations barely hurts performance on a range of classification and multi-choce tasks, consistently over 12 different models including GPT-3. Instead, we find that other aspects of the demonstrations are the key drivers of end task performance, including the fact that they provide a few examples of (1) the label space, (2) the distribution of the input text, and (3) the overall format of the sequence. Together, our analysis provides a new way of understanding how and why in-context learning works, while opening up new questions about how much can be learned from large language models through inference alone.",True,True,"Min, Sewon and Lyu, Xinxi and Holtzman, Ari and Artetxe, Mikel and Lewis, Mike and Hajishirzi, Hannaneh and Zettlemoyer, Luke",2022.0,,,,arXiv preprint arXiv:2202.12837,"Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?",[PDF] What Makes In-Context Learning Work? - ACL Anthology,https://aclanthology.org/2022.emnlp-main.759.pdf,Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? Large language models (LMs) are able to in- context learn—perform a new task via Dual Debiasing for Noisy In-Context Learning for Text Generation,2506.00418v1,kang2024context,\cite{kang2024context},In-Context Learning with Noisy Labels,http://arxiv.org/abs/2411.19581v1,"In-context learning refers to the emerging ability of large language models (LLMs) to perform a target task without additional training, utilizing demonstrations of the task. Recent studies aim to enhance in-context learning performance by selecting more useful demonstrations. However, they overlook the presence of inevitable noisy labels in task demonstrations that arise during the labeling process in the real-world. In this paper, we propose a new task, in-context learning with noisy labels, which aims to solve real-world problems for in-context learning where labels in task demonstrations would be corrupted. Moreover, we propose a new method and baseline methods for the new task, inspired by studies in learning with noisy labels. 
Through experiments, we demonstrate that our proposed method can serve as a safeguard against performance degradation in in-context learning caused by noisy labels.",True,True,"Kang, Junyong and Son, Donghyun and Song, Hwanjun and Chang, Buru",2024.0,,,,arXiv preprint arXiv:2411.19581,In-Context Learning with Noisy Labels,[2411.19581] In-Context Learning with Noisy Labels - arXiv,https://arxiv.org/abs/2411.19581,"In this paper, we propose a new task, in-context learning with noisy labels, which aims to solve real-world problems for in-context learning." Dual Debiasing for Noisy In-Context Learning for Text Generation,2506.00418v1,gao2024noise,\cite{gao2024noise},On the Noise Robustness of In-Context Learning for Text Generation,http://arxiv.org/abs/2405.17264v3,"Large language models (LLMs) have shown impressive performance on downstream tasks by in-context learning (ICL), which heavily relies on the quality of demonstrations selected from a large set of annotated examples. Recent works claim that in-context learning is robust to noisy demonstrations in text classification. In this work, we show that, on text generation tasks, noisy annotations significantly hurt the performance of in-context learning. To circumvent the issue, we propose a simple and effective approach called Local Perplexity Ranking (LPR), which replaces the ""noisy"" candidates with their nearest neighbors that are more likely to be clean. Our method is motivated by analyzing the perplexity deviation caused by noisy labels and decomposing perplexity into inherent perplexity and matching perplexity. Our key idea behind LPR is thus to decouple the matching perplexity by performing the ranking among the neighbors in semantic space. Our approach can prevent the selected demonstrations from including mismatched input-label pairs while preserving the effectiveness of the original selection methods. Extensive experiments demonstrate the effectiveness of LPR, improving the EM score by up to 18.75 on common benchmarks with noisy annotations. Our code is available at https://github.com/ml-stat-Sustech/Local-Perplexity-Ranking.",True,True,"Gao, Hongfu and Zhang, Feipeng and Jiang, Wenyu and Shu, Jun and Zheng, Feng and Wei, Hongxin",2024.0,,,,,On the Noise Robustness of In-Context Learning for Text Generation,On the Noise Robustness of In-Context Learning for Text ...,https://openreview.net/forum?id=00uVk06eVK&referrer=%5Bthe%20profile%20of%20Hongxin%20Wei%5D(%2Fprofile%3Fid%3D~Hongxin_Wei1),"The paper ""On the Noise Robustness of In-Context Learning for Text Generation"" investigates how LLMs handle noisy annotations during in-context" Dual Debiasing for Noisy In-Context Learning for Text Generation,2506.00418v1,li2022contrastive,\cite{li2022contrastive},Contrastive Decoding: Open-ended Text Generation as Optimization,http://arxiv.org/abs/2210.15097v2,"Given a language model (LM), maximum probability is a poor decoding objective for open-ended generation, because it produces short and repetitive text. On the other hand, sampling can often produce incoherent text that drifts from the original topics. We propose contrastive decoding (CD), a reliable decoding approach that optimizes a contrastive objective subject to a plausibility constraint. The contrastive objective returns the difference between the likelihood under a large LM (called the expert, e.g. OPT-13B) and a small LM (called the amateur, e.g. OPT-125M), and the constraint ensures that the outputs are plausible. 
CD is inspired by the fact that the failures of larger LMs (e.g., repetition, incoherence) are even more prevalent in smaller LMs, and that this difference signals which texts should be preferred. CD requires zero additional training, and produces higher quality text than decoding from the larger LM alone. It also works across model scales (OPT-13B and GPT2-1.5B) and significantly outperforms four strong decoding algorithms (e.g., nucleus, top-k) in automatic and human evaluations across wikipedia, news and story domains.",True,True,"Li, Xiang Lisa and Holtzman, Ari and Fried, Daniel and Liang, Percy and Eisner, Jason and Hashimoto, Tatsunori and Zettlemoyer, Luke and Lewis, Mike",2022.0,,,,arXiv preprint arXiv:2210.15097,Contrastive Decoding: Open-ended Text Generation as Optimization,Contrastive Decoding: Open-ended Text Generation as Optimization,https://arxiv.org/abs/2210.15097,"We propose contrastive decoding (CD), a reliable decoding approach that optimizes a contrastive objective subject to a plausibility constraint." Dual Debiasing for Noisy In-Context Learning for Text Generation,2506.00418v1,zhao2024enhancing,\cite{zhao2024enhancing},"Enhancing Contextual Understanding in Large Language Models through Contrastive Decoding",http://arxiv.org/abs/2405.02750v1,"Large language models (LLMs) tend to inadequately integrate input context during text generation, relying excessively on encoded prior knowledge in model parameters, potentially resulting in generated text with factual inconsistencies or contextually unfaithful content. LLMs utilize two primary knowledge sources: 1) prior (parametric) knowledge from pretraining, and 2) contextual (non-parametric) knowledge from input prompts. The study addresses the open question of how LLMs effectively balance these knowledge sources during the generation process, specifically in the context of open-domain question answering. To address this issue, we introduce a novel approach integrating contrastive decoding with adversarial irrelevant passages as negative samples to enhance robust context grounding during generation. Notably, our method operates at inference time without requiring further training. We conduct comprehensive experiments to demonstrate its applicability and effectiveness, providing empirical evidence showcasing its superiority over existing methodologies. Our code is publicly available at: https://github.com/amazon-science/ContextualUnderstanding-ContrastiveDecoding.",True,True,"Zhao, Zheng and Monti, Emilio and Lehmann, Jens and Assem, Haytham",2024.0,,,,arXiv preprint arXiv:2405.02750,"Enhancing Contextual Understanding in Large Language Models through Contrastive Decoding",Enhancing Contextual Understanding in Large Language Models ...,https://aclanthology.org/2024.naacl-long.237/,We introduce a novel approach integrating contrastive decoding with adversarial irrelevant passages as negative samples to enhance robust context grounding Dual Debiasing for Noisy In-Context Learning for Text Generation,2506.00418v1,fei2023mitigating,\cite{fei2023mitigating},Mitigating Label Biases for In-context Learning,http://arxiv.org/abs/2305.19148v3,"Various design settings for in-context learning (ICL), such as the choice and order of the in-context examples, can bias a model toward a particular prediction without being reflective of an understanding of the task. While many studies discuss these design choices, there have been few systematic investigations into categorizing them and mitigating their impact. 
In this work, we define a typology for three types of label biases in ICL for text classification: vanilla-label bias, context-label bias, and domain-label bias (which we conceptualize and detect for the first time). Our analysis demonstrates that prior label bias calibration methods fall short of addressing all three types of biases. Specifically, domain-label bias restricts LLMs to random-level performance on many tasks regardless of the choice of in-context examples. To mitigate the effect of these biases, we propose a simple bias calibration method that estimates a language model's label bias using random in-domain words from the task corpus. After controlling for this estimated bias when making predictions, our novel domain-context calibration significantly improves the ICL performance of GPT-J and GPT-3 on a wide range of tasks. The gain is substantial on tasks with large domain-label bias (up to 37% in Macro-F1). Furthermore, our results generalize to models with different scales, pretraining methods, and manually-designed task instructions, showing the prevalence of label biases in ICL.",True,True,"Fei, Yu and Hou, Yifan and Chen, Zeming and Bosselut, Antoine",2023.0,,,,arXiv preprint arXiv:2305.19148,Mitigating Label Biases for In-context Learning,[2305.19148] Mitigating Label Biases for In-context Learning - arXiv,https://arxiv.org/abs/2305.19148,"In this work, we define a typology for three types of label biases in ICL for text classification: vanilla-label bias, context-label bias, and domain-label" Dual Debiasing for Noisy In-Context Learning for Text Generation,2506.00418v1,zhao2021calibrate,\cite{zhao2021calibrate},Calibrate Before Use: Improving Few-Shot Performance of Language Models,http://arxiv.org/abs/2102.09690v2,"GPT-3 can perform numerous tasks when provided a natural language prompt that contains a few training examples. We show that this type of few-shot learning can be unstable: the choice of prompt format, training examples, and even the order of the training examples can cause accuracy to vary from near chance to near state-of-the-art. We demonstrate that this instability arises from the bias of language models towards predicting certain answers, e.g., those that are placed near the end of the prompt or are common in the pre-training data. To mitigate this, we first estimate the model's bias towards each answer by asking for its prediction when given the training prompt and a content-free test input such as ""N/A"". We then fit calibration parameters that cause the prediction for this input to be uniform across answers. On a diverse set of tasks, this contextual calibration procedure substantially improves GPT-3 and GPT-2's average accuracy (up to 30.0% absolute) and reduces variance across different choices of the prompt.",True,True,"Zhao, Zihao and Wallace, Eric and Feng, Shi and Klein, Dan and Singh, Sameer",2021.0,,,,,Calibrate Before Use: Improving Few-Shot Performance of Language Models,Calibrate Before Use: Improving Few-Shot Performance of ...,http://proceedings.mlr.press/v139/zhao21c/zhao21c.pdf,"by Z Zhao · 2021 · Cited by 1608 — Overall, contextual calibration is a simple method that makes language models better few-shot learners: it enables end users to obtain higher accuracy with." 
"Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation",2505.24754v1,NIPS2013_9aa42b31,\cite{NIPS2013_9aa42b31},"Distributed Representations of Words and Phrases and their Compositionality",http://arxiv.org/abs/1310.4546v1,"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of ""Canada"" and ""Air"" cannot be easily combined to obtain ""Air Canada"". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.",True,True,"Tom{\'{a}}s Mikolov and Ilya Sutskever and Kai Chen and Gregory S. Corrado and Jeffrey Dean",2013.0,,https://proceedings.neurips.cc/paper/2013/hash/9aa42b31882ec039965f3c4923ce901b-Abstract.html,,,"Distributed Representations of Words and Phrases and their Compositionality",[PDF] Distributed Representations of Words and Phrases and their ...,https://proceedings.neurips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf,"Distributed representations of words use vector spaces to group similar words, capturing syntactic and semantic relationships, and are limited by their" "Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation",2505.24754v1,pennington-etal-2014-glove,\cite{pennington-etal-2014-glove},Glove: Global Vectors for Word Representation,,,True,False,"Jeffrey Pennington and Richard Socher and Christopher D. Manning",2014.0,,https://doi.org/10.3115/v1/d14-1162,10.3115/V1/D14-1162,,Glove: Global Vectors for Word Representation,GloVe: Global Vectors for Word Representation,https://nlp.stanford.edu/projects/glove/,"GloVe: Global Vectors for Word Representation GloVe: Global Vectors for Word RepresentationJeffrey Pennington, Richard Socher, Christopher D. GloVe: Global Vectors for Word Representation. GloVe is designed in order that such vector differences capture as much as possible the meaning specified by the juxtaposition of two words. The GloVe model is trained on the non-zero entries of a global word-word co-occurrence matrix, which tabulates how frequently words co-occur with one another in a given corpus. The training objective of GloVe is to learn word vectors such that their dot product equals the logarithm of the words' probability of co-occurrence. This feature is not unique to GloVe -- in fact, I'm unaware of any model for word vector learning that avoids this issue." "Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation",2505.24754v1,transformer,\cite{transformer},Attention Is All You Need,http://arxiv.org/abs/1706.03762v7,"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. 
The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.",True,True,"Ashish Vaswani and Noam Shazeer and Niki Parmar and Jakob Uszkoreit and Llion Jones and Aidan N. Gomez and Lukasz Kaiser and Illia Polosukhin",2017.0,,https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html,,,Attention Is All You Need,Attention Is All You Need,http://arxiv.org/pdf/1706.03762v7,"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data." "Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation",2505.24754v1,devlin-etal-2019-bert,\cite{devlin-etal-2019-bert},"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",http://arxiv.org/abs/1810.04805v2,"We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. 
It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).",True,True,"Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova",2019.0,,https://doi.org/10.18653/v1/n19-1423,10.18653/V1/N19-1423,,"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",[PDF] BERT: Pre-training of Deep Bidirectional Transformers for Language ...,https://aclanthology.org/N19-1423.pdf,"Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. More recently, sentence or document encoders which produce contextual token representations have been pre-trained from unlabeled text and fine-tuned for a supervised downstream task (Dai and Le, 2015; Howard and Ruder, 2018; Radford et al., 2018)." "Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation",2505.24754v1,cer-etal-2018-universal,\cite{cer-etal-2018-universal},Universal Sentence Encoder for English,,,True,False,"Daniel Cer and Yinfei Yang and Sheng{-}yi Kong and Nan Hua and Nicole Limtiaco and Rhomni St. John and Noah Constant and Mario Guajardo{-}Cespedes and Steve Yuan and Chris Tar and Brian Strope and Ray Kurzweil",2018.0,,https://doi.org/10.18653/v1/d18-2029,10.18653/V1/D18-2029,,Universal Sentence Encoder for English,[1803.11175] Universal Sentence Encoder - arXiv,https://arxiv.org/abs/1803.11175,We present models for encoding sentences into embedding vectors that specifically target transfer learning to other NLP tasks. "Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation",2505.24754v1,reimers-gurevych-2019-sentence,\cite{reimers-gurevych-2019-sentence},Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks,http://arxiv.org/abs/1908.10084v1,"BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) has set a new state-of-the-art performance on sentence-pair regression tasks like semantic textual similarity (STS). However, it requires that both sentences are fed into the network, which causes a massive computational overhead: Finding the most similar pair in a collection of 10,000 sentences requires about 50 million inference computations (~65 hours) with BERT. The construction of BERT makes it unsuitable for semantic similarity search as well as for unsupervised tasks like clustering. In this publication, we present Sentence-BERT (SBERT), a modification of the pretrained BERT network that use siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. This reduces the effort for finding the most similar pair from 65 hours with BERT / RoBERTa to about 5 seconds with SBERT, while maintaining the accuracy from BERT.
We evaluate SBERT and SRoBERTa on common STS tasks and transfer learning tasks, where it outperforms other state-of-the-art sentence embeddings methods.",True,True,"Nils Reimers and Iryna Gurevych",2019.0,,https://doi.org/10.18653/v1/D19-1410,10.18653/V1/D19-1410,,Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks,[PDF] Sentence Embeddings using Siamese BERT-Networks,https://aclanthology.org/D19-1410.pdf,"BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) has set a new state-of-the-art performance on sentence-pair regression tasks like semantic textual similarity (STS). We fine-tune SBERT on NLI data, which creates sentence embeddings that significantly outperform other state-of-the-art sentence embedding methods like InferSent (Conneau et al., 2017) and Universal Sentence Encoder (Cer et al., 2018)." "Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation",2505.24754v1,gao-etal-2021-simcse,\cite{gao-etal-2021-simcse},SimCSE: Simple Contrastive Learning of Sentence Embeddings,http://arxiv.org/abs/2104.08821v4,"This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works surprisingly well, performing on par with previous supervised counterparts. We find that dropout acts as minimal data augmentation, and removing it leads to a representation collapse. Then, we propose a supervised approach, which incorporates annotated pairs from natural language inference datasets into our contrastive learning framework by using ""entailment"" pairs as positives and ""contradiction"" pairs as hard negatives. We evaluate SimCSE on standard semantic textual similarity (STS) tasks, and our unsupervised and supervised models using BERT base achieve an average of 76.3% and 81.6% Spearman's correlation respectively, a 4.2% and 2.2% improvement compared to the previous best results. We also show -- both theoretically and empirically -- that the contrastive learning objective regularizes pre-trained embeddings' anisotropic space to be more uniform, and it better aligns positive pairs when supervised signals are available.",True,True,"Tianyu Gao and Xingcheng Yao and Danqi Chen",2021.0,,https://doi.org/10.18653/v1/2021.emnlp-main.552,,,SimCSE: Simple Contrastive Learning of Sentence Embeddings,SimCSE: Simple Contrastive Learning of Sentence Embeddings,http://arxiv.org/pdf/2104.08821v4,"This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works surprisingly well, performing on par with previous supervised counterparts. We find that dropout acts as minimal data augmentation, and removing it leads to a representation collapse.
Then, we propose a supervised approach, which incorporates annotated pairs from natural language inference datasets into our contrastive learning framework by using ""entailment"" pairs as positives and ""contradiction"" pairs as hard negatives. We evaluate SimCSE on standard semantic textual similarity (STS) tasks, and our unsupervised and supervised models using BERT base achieve an average of 76.3% and 81.6% Spearman's correlation respectively, a 4.2% and 2.2% improvement compared to the previous best results. We also show -- both theoretically and empirically -- that the contrastive learning objective regularizes pre-trained embeddings' anisotropic space to be more uniform, and it better aligns positive pairs when supervised signals are available." "Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation",2505.24754v1,zhuo-etal-2023-whitenedcse,\cite{zhuo-etal-2023-whitenedcse},WhitenedCSE: Whitening-based Contrastive Learning of Sentence Embeddings,,,True,False,"Wenjie Zhuo and Yifan Sun and Xiaohan Wang and Linchao Zhu and Yi Yang",2023.0,,https://doi.org/10.18653/v1/2023.acl-long.677,10.18653/V1/2023.ACL-LONG.677,,WhitenedCSE: Whitening-based Contrastive Learning of Sentence Embeddings,Whitening-based Contrastive Learning of Sentence Embeddings,https://aclanthology.org/2023.acl-long.677/,"This paper presents a whitening-based contrastive learning method for sentence embedding learning (WhitenedCSE), which combines contrastive learning with a" "Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation",2505.24754v1,wang2023improving,\cite{wang2023improving},Improving Text Embeddings with Large Language Models,http://arxiv.org/abs/2401.00368v3,"In this paper, we introduce a novel and simple method for obtaining high-quality text embeddings using only synthetic data and less than 1k training steps. Unlike existing methods that often depend on multi-stage intermediate pre-training with billions of weakly-supervised text pairs, followed by fine-tuning with a few labeled datasets, our method does not require building complex training pipelines or relying on manually collected datasets that are often constrained by task diversity and language coverage. We leverage proprietary LLMs to generate diverse synthetic data for hundreds of thousands of text embedding tasks across 93 languages. We then fine-tune open-source decoder-only LLMs on the synthetic data using standard contrastive loss. Experiments demonstrate that our method achieves strong performance on highly competitive text embedding benchmarks without using any labeled data. Furthermore, when fine-tuned with a mixture of synthetic and labeled data, our model sets new state-of-the-art results on the BEIR and MTEB benchmarks.",True,True,"Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu",2023.0,,https://doi.org/10.48550/arXiv.2401.00368,,arXiv,Improving Text Embeddings with Large Language Models,Improving Text Embeddings with Large Language Models,http://arxiv.org/pdf/2401.00368v3,"In this paper, we introduce a novel and simple method for obtaining high-quality text embeddings using only synthetic data and less than 1k training steps. 
Unlike existing methods that often depend on multi-stage intermediate pre-training with billions of weakly-supervised text pairs, followed by fine-tuning with a few labeled datasets, our method does not require building complex training pipelines or relying on manually collected datasets that are often constrained by task diversity and language coverage. We leverage proprietary LLMs to generate diverse synthetic data for hundreds of thousands of text embedding tasks across 93 languages. We then fine-tune open-source decoder-only LLMs on the synthetic data using standard contrastive loss. Experiments demonstrate that our method achieves strong performance on highly competitive text embedding benchmarks without using any labeled data. Furthermore, when fine-tuned with a mixture of synthetic and labeled data, our model sets new state-of-the-art results on the BEIR and MTEB benchmarks." "Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation",2505.24754v1,muennighoff2024generative,\cite{muennighoff2024generative},Generative Representational Instruction Tuning,http://arxiv.org/abs/2402.09906v3,"All text-based language problems can be reduced to either generation or embedding. Current models only perform well at one or the other. We introduce generative representational instruction tuning (GRIT) whereby a large language model is trained to handle both generative and embedding tasks by distinguishing between them through instructions. Compared to other open models, our resulting GritLM 7B sets a new state of the art on the Massive Text Embedding Benchmark (MTEB) and outperforms all models up to its size on a range of generative tasks. By scaling up further, GritLM 8x7B outperforms all open generative language models that we tried while still being among the best embedding models. Notably, we find that GRIT matches training on only generative or embedding data, thus we can unify both at no performance loss. Among other benefits, the unification via GRIT speeds up Retrieval-Augmented Generation (RAG) by > 60% for long documents, by no longer requiring separate retrieval and generation models. Models, code, etc. are freely available at https://github.com/ContextualAI/gritlm.",True,True,"Niklas Muennighoff and Hongjin Su and Liang Wang and Nan Yang and Furu Wei and Tao Yu and Amanpreet Singh and Douwe Kiela",2025.0,,https://openreview.net/forum?id=BC4lIvfSzv,,,Generative Representational Instruction Tuning,Generative Representational Instruction Tuning,http://arxiv.org/pdf/2402.09906v3,"All text-based language problems can be reduced to either generation or embedding. Current models only perform well at one or the other. We introduce generative representational instruction tuning (GRIT) whereby a large language model is trained to handle both generative and embedding tasks by distinguishing between them through instructions. Compared to other open models, our resulting GritLM 7B sets a new state of the art on the Massive Text Embedding Benchmark (MTEB) and outperforms all models up to its size on a range of generative tasks. By scaling up further, GritLM 8x7B outperforms all open generative language models that we tried while still being among the best embedding models. Notably, we find that GRIT matches training on only generative or embedding data, thus we can unify both at no performance loss. 
Among other benefits, the unification via GRIT speeds up Retrieval-Augmented Generation (RAG) by > 60% for long documents, by no longer requiring separate retrieval and generation models. Models, code, etc. are freely available at https://github.com/ContextualAI/gritlm." "Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation",2505.24754v1,lei-etal-2024-meta,\cite{lei-etal-2024-meta},Meta-Task Prompting Elicits Embeddings from Large Language Models,http://arxiv.org/abs/2402.18458v2,"We introduce a new unsupervised text embedding method, Meta-Task Prompting with Explicit One-Word Limitation (MetaEOL), for generating high-quality sentence embeddings from Large Language Models (LLMs) without the need for model fine-tuning. Leveraging meta-task prompting, MetaEOL guides LLMs to produce embeddings through a series of carefully designed prompts that address multiple representational aspects. Our comprehensive experiments demonstrate that embeddings averaged from various meta-tasks are versatile embeddings that yield competitive performance on Semantic Textual Similarity (STS) benchmarks and excel in downstream tasks, surpassing contrastive-trained models. Our findings suggest a new scaling law, offering a versatile and resource-efficient approach for embedding generation across diverse scenarios.",True,True,"Yibin Lei and Di Wu and Tianyi Zhou and Tao Shen and Yu Cao and Chongyang Tao and Andrew Yates",2024.0,,https://doi.org/10.18653/v1/2024.acl-long.546,10.18653/V1/2024.ACL-LONG.546,,Meta-Task Prompting Elicits Embeddings from Large Language Models,[PDF] Meta-Task Prompting Elicits Embeddings from Large Language ...,https://aclanthology.org/2024.acl-long.546.pdf,"We introduce a new unsupervised text embedding method, Meta-Task Prompting with Explicit One-Word Limitation (MetaEOL), for generating high-quality sentence embeddings from Large Language Models (LLMs) without the need for model fine-tuning." "Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation",2505.24754v1,li-li-2024-aoe,\cite{li-li-2024-aoe},AoE: Angle-optimized Embeddings for Semantic Textual Similarity,,,True,False,"Xianming Li and Jing Li",2024.0,,https://doi.org/10.18653/v1/2024.acl-long.101,10.18653/V1/2024.ACL-LONG.101,,AoE: Angle-optimized Embeddings for Semantic Textual Similarity,AoE: Angle-optimized Embeddings for Semantic Textual Similarity,https://aclanthology.org/2024.acl-long.101/,"We propose a novel Angle-optimized Embedding model, AoE. It optimizes angle differences in complex space to explore similarity in saturation zones better."
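The MetaEOL entry above describes prompting a frozen LLM with several meta-task templates that end in a one-word limitation, then averaging the resulting vectors. A rough sketch of that idea, assuming a Hugging Face causal LM; the model name and the three templates are illustrative, not the paper's exact set.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # assumption: any causal LM works; the paper uses larger LLMs
tok = AutoTokenizer.from_pretrained(MODEL)
lm = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)

# Illustrative meta-task prompts with the explicit one-word limitation.
TEMPLATES = [
    'This sentence: "{s}" means in one word:"',
    'The topic of this sentence: "{s}" in one word:"',
    'The sentiment of this sentence: "{s}" in one word:"',
]

@torch.no_grad()
def embed(sentence):
    vecs = []
    for t in TEMPLATES:
        ids = tok(t.format(s=sentence), return_tensors="pt")
        out = lm(**ids)
        # Final hidden state of the last token, one vector per meta-task.
        vecs.append(out.hidden_states[-1][0, -1])
    # Averaging across meta-tasks yields the sentence embedding.
    return torch.stack(vecs).mean(dim=0)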
"Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation",2505.24754v1,su-etal-2023-one,\cite{su-etal-2023-one},"One Embedder, Any Task: Instruction-Finetuned Text Embeddings",http://arxiv.org/abs/2212.09741v3,"We introduce INSTRUCTOR, a new method for computing text embeddings given task instructions: every text input is embedded together with instructions explaining the use case (e.g., task and domain descriptions). Unlike encoders from prior work that are more specialized, INSTRUCTOR is a single embedder that can generate text embeddings tailored to different downstream tasks and domains, without any further training. We first annotate instructions for 330 diverse tasks and train INSTRUCTOR on this multitask mixture with a contrastive loss. We evaluate INSTRUCTOR on 70 embedding evaluation tasks (66 of which are unseen during training), ranging from classification and information retrieval to semantic textual similarity and text generation evaluation. INSTRUCTOR, while having an order of magnitude fewer parameters than the previous best model, achieves state-of-the-art performance, with an average improvement of 3.4% compared to the previous best results on the 70 diverse datasets. Our analysis suggests that INSTRUCTOR is robust to changes in instructions, and that instruction finetuning mitigates the challenge of training a single model on diverse datasets. Our model, code, and data are available at https://instructor-embedding.github.io.",True,True,"Su, Hongjin and Shi, Weijia and Kasai, Jungo and Wang, Yizhong and Hu, Yushi and Ostendorf, Mari and Yih, Wen-tau and Smith, Noah A. and Zettlemoyer, Luke and Yu, Tao",2023.0,,https://aclanthology.org/2023.findings-acl.71/,,,"One Embedder, Any Task: Instruction-Finetuned Text Embeddings","One Embedder, Any Task: Instruction-Finetuned Text Embeddings",https://aclanthology.org/2023.findings-acl.71/,"Anthology ID:2023.findings-acl.71 Volume:Findings of the Association for Computational Linguistics: ACL 2023Month:July Year:2023 Address:Toronto, Canada Editors:Anna Rogers, Jordan Boyd-Graber, Naoaki OkazakiVenue:FindingsSIG:Publisher:Association for Computational Linguistics Note:Pages:1102–1121 Language:URL:https://aclanthology.org/2023.findings-acl.71/DOI:10.18653/v1/2023.findings-acl.71Bibkey:su-etal-2023-one Cite (ACL):Hongjin Su, Weijia Shi, Jungo Kasai, Yizhong Wang, Yushi Hu, Mari Ostendorf, Wen-tau Yih, Noah A. Association for Computational Linguistics.Cite (Informal):One Embedder, Any Task: Instruction-Finetuned Text Embeddings (Su et al., Findings 2023)Copy Citation:BibTeX Markdown MODS XML Endnote More options…PDF:https://aclanthology.org/2023.findings-acl.71.pdfVideo:https://aclanthology.org/2023.findings-acl.71.mp4 abstract = ""We introduce INSTRUCTOR, a new method for computing text embeddings given task instructions: every text input is embedded together with instructions explaining the use case (e.g., task and domain descriptions). We introduce INSTRUCTOR, a new method for computing text embeddings given task instructions: every text input is embedded together with instructions explaining the use case (e.g., task and domain descriptions)." 
"Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation",2505.24754v1,peng-etal-2024-answer,\cite{peng-etal-2024-answer},"Answer is All You Need: Instruction-following Text Embedding via Answering the Question",http://arxiv.org/abs/2402.09642v1,"This work aims to build a text embedder that can capture characteristics of texts specified by user instructions. Despite its tremendous potential to deploy user-oriented embeddings, none of previous approaches provides a concrete solution for it. This paper offers a new viewpoint, which treats the instruction as a question about the input text and encodes the expected answers to obtain the representation accordingly. Intuitively, texts with the same (implicit) semantics would share similar answers following the instruction, thus leading to more similar embeddings. Specifically, we propose InBedder that instantiates this embed-via-answering idea by only fine-tuning language models on abstractive question answering tasks. InBedder demonstrates significantly improved instruction-following capabilities according to our proposed instruction awareness tests and instruction robustness tests, when applied to both large language models (LLMs) (e.g., llama-2-7b) and smaller encoder-based LMs (e.g., roberta-large). Additionally, our qualitative analysis of clustering outcomes, achieved by applying different instructions to the same corpus, demonstrates a high degree of interpretability.",True,True,"Letian Peng and Yuwei Zhang and Zilong Wang and Jayanth Srinivasa and Gaowen Liu and Zihan Wang and Jingbo Shang",2024.0,,https://doi.org/10.18653/v1/2024.acl-long.27,10.18653/V1/2024.ACL-LONG.27,,"Answer is All You Need: Instruction-following Text Embedding via Answering the Question",Answer is All You Need: Instruction-following Text ...,https://aclanthology.org/2024.acl-long.27/,by L Peng · 2024 · Cited by 11 — This work aims to build a text embedder that can capture characteristics of texts specified by user instructions clarifying the similarity criterion.See more "Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation",2505.24754v1,weller2024promptriever,\cite{weller2024promptriever},"Promptriever: Instruction-Trained Retrievers Can Be Prompted Like Language Models",http://arxiv.org/abs/2409.11136v1,"Instruction-tuned language models (LM) are able to respond to imperative commands, providing a more natural user interface compared to their base counterparts. In this work, we present Promptriever, the first retrieval model able to be prompted like an LM. To train Promptriever, we curate and release a new instance-level instruction training set from MS MARCO, spanning nearly 500k instances. Promptriever not only achieves strong performance on standard retrieval tasks, but also follows instructions. We observe: (1) large gains (reaching SoTA) on following detailed relevance instructions (+14.3 p-MRR / +3.1 nDCG on FollowIR), (2) significantly increased robustness to lexical choices/phrasing in the query+instruction (+12.9 Robustness@10 on InstructIR), and (3) the ability to perform hyperparameter search via prompting to reliably improve retrieval performance (+1.4 average increase on BEIR). Promptriever demonstrates that retrieval models can be controlled with prompts on a per-query basis, setting the stage for future work aligning LM prompting techniques with information retrieval.",True,True,"Orion Weller and Benjamin Van Durme and Dawn J. 
Lawrie and Ashwin Paranjape and Yuhao Zhang and Jack Hessel",2025.0,,https://openreview.net/forum?id=odvSjn416y,,,"Promptriever: Instruction-Trained Retrievers Can Be Prompted Like Language Models",Promptriever: Instruction-Trained Retrievers Can Be ...,https://openreview.net/forum?id=odvSjn416y,"by O Weller · Cited by 29 — This paper introduces Promptriever, a retrieval model that can be prompted like a language model. The authors construct an instance-level instruction training" "Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation",2505.24754v1,min2024unihgkr,\cite{min2024unihgkr},UniHGKR: Unified Instruction-aware Heterogeneous Knowledge Retrievers,http://arxiv.org/abs/2410.20163v2,"Existing information retrieval (IR) models often assume a homogeneous structure for knowledge sources and user queries, limiting their applicability in real-world settings where retrieval is inherently heterogeneous and diverse. In this paper, we introduce UniHGKR, a unified instruction-aware heterogeneous knowledge retriever that (1) builds a unified retrieval space for heterogeneous knowledge and (2) follows diverse user instructions to retrieve knowledge of specified types. UniHGKR consists of three principal stages: heterogeneous self-supervised pretraining, text-anchored embedding alignment, and instruction-aware retriever fine-tuning, enabling it to generalize across varied retrieval contexts. This framework is highly scalable, with a BERT-based version and a UniHGKR-7B version trained on large language models. Also, we introduce CompMix-IR, the first native heterogeneous knowledge retrieval benchmark. It includes two retrieval scenarios with various instructions, over 9,400 question-answer (QA) pairs, and a corpus of 10 million entries, covering four different types of data. Extensive experiments show that UniHGKR consistently outperforms state-of-the-art methods on CompMix-IR, achieving up to 6.36% and 54.23% relative improvements in two scenarios, respectively. Finally, by equipping our retriever for open-domain heterogeneous QA systems, we achieve a new state-of-the-art result on the popular ConvMix task, with an absolute improvement of up to 5.90 points.",True,True,"Dehai Min and Zhiyang Xu and Guilin Qi and Lifu Huang and Chenyu You",2025.0,,https://aclanthology.org/2025.naacl-long.234/,,,UniHGKR: Unified Instruction-aware Heterogeneous Knowledge Retrievers,UniHGKR: Unified Instruction-aware Heterogeneous ...,https://arxiv.org/abs/2410.20163,"by D Min · 2024 · Cited by 2 — In this paper, we introduce UniHGKR, a unified instruction-aware heterogeneous knowledge retriever that (1) builds a unified retrieval space for heterogeneous" "Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation",2505.24754v1,oh2024instructir,\cite{oh2024instructir},"INSTRUCTIR: A Benchmark for Instruction Following of Information Retrieval Models",http://arxiv.org/abs/2402.14334v1,"Despite the critical need to align search targets with users' intention, retrievers often only prioritize query information without delving into the users' intended search context. Enhancing the capability of retrievers to understand intentions and preferences of users, akin to language model instructions, has the potential to yield more aligned search targets. 
Prior studies restrict the application of instructions in information retrieval to a task description format, neglecting the broader context of diverse and evolving search scenarios. Furthermore, the prevailing benchmarks utilized for evaluation lack explicit tailoring to assess instruction-following ability, thereby hindering progress in this field. In response to these limitations, we propose a novel benchmark,INSTRUCTIR, specifically designed to evaluate instruction-following ability in information retrieval tasks. Our approach focuses on user-aligned instructions tailored to each query instance, reflecting the diverse characteristics inherent in real-world search scenarios. Through experimental analysis, we observe that retrievers fine-tuned to follow task-style instructions, such as INSTRUCTOR, can underperform compared to their non-instruction-tuned counterparts. This underscores potential overfitting issues inherent in constructing retrievers trained on existing instruction-aware retrieval datasets.",True,True,"Hanseok Oh and Hyunji Lee and Seonghyeon Ye and Haebin Shin and Hansol Jang and Changwook Jun and Minjoon Seo",2024.0,,https://doi.org/10.48550/arXiv.2402.14334,10.48550/ARXIV.2402.14334,arXiv,"INSTRUCTIR: A Benchmark for Instruction Following of Information Retrieval Models",InstructIR: A Benchmark for Instruction Following of ...,https://arxiv.org/html/2402.14334v1,"Our approach focuses on user-aligned instructions tailored to each query instance, reflecting the diverse characteristics inherent in real-world search scenarios. Moreover, lack of benchmarks to evaluate retrievers on user-aligned scenarios prevents the mature discussions of instruction following in retrieval task. In this work, we introduce a novel benchmark, InstructIR, specifically designed to evaluate instruction-following ability of retrieval models with diverse user-aligned instructions for each query, mirroring real-world search scenarios. Constructing a framework to evaluate instruction-following capabilities in information retrieval models necessitates correlating multiple instructions with the same query and adjusting their targets accordingly (i.e., instruction, query, target text). Therefore, in contrast to previous approaches that evaluate coarse-grained task description-style instructions on information retrieval datasets with up to 15 instructions, we focus on creating per-query, instance-specific instructions as Table 1." "Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation",2505.24754v1,sun2024mair,\cite{sun2024mair},MAIR: A Massive Benchmark for Evaluating Instructed Retrieval,http://arxiv.org/abs/2410.10127v1,"Recent information retrieval (IR) models are pre-trained and instruction-tuned on massive datasets and tasks, enabling them to perform well on a wide range of tasks and potentially generalize to unseen tasks with instructions. However, existing IR benchmarks focus on a limited scope of tasks, making them insufficient for evaluating the latest IR models. In this paper, we propose MAIR (Massive Instructed Retrieval Benchmark), a heterogeneous IR benchmark that includes 126 distinct IR tasks across 6 domains, collected from existing datasets. We benchmark state-of-the-art instruction-tuned text embedding models and re-ranking models. Our experiments reveal that instruction-tuned models generally achieve superior performance compared to non-instruction-tuned models on MAIR. 
Additionally, our results suggest that current instruction-tuned text embedding models and re-ranking models still lack effectiveness in specific long-tail tasks. MAIR is publicly available at https://github.com/sunnweiwei/Mair.",True,True,"Weiwei Sun and Zhengliang Shi and Wu Long and Lingyong Yan and Xinyu Ma and Yiding Liu and Min Cao and Dawei Yin and Zhaochun Ren",2024.0,,https://aclanthology.org/2024.emnlp-main.778,,,MAIR: A Massive Benchmark for Evaluating Instructed Retrieval,MAIR: A Massive Benchmark for Evaluating Instructed Retrieval,http://arxiv.org/pdf/2410.10127v1,"Recent information retrieval (IR) models are pre-trained and instruction-tuned on massive datasets and tasks, enabling them to perform well on a wide range of tasks and potentially generalize to unseen tasks with instructions. However, existing IR benchmarks focus on a limited scope of tasks, making them insufficient for evaluating the latest IR models. In this paper, we propose MAIR (Massive Instructed Retrieval Benchmark), a heterogeneous IR benchmark that includes 126 distinct IR tasks across 6 domains, collected from existing datasets. We benchmark state-of-the-art instruction-tuned text embedding models and re-ranking models. Our experiments reveal that instruction-tuned models generally achieve superior performance compared to non-instruction-tuned models on MAIR. Additionally, our results suggest that current instruction-tuned text embedding models and re-ranking models still lack effectiveness in specific long-tail tasks. MAIR is publicly available at https://github.com/sunnweiwei/Mair." "Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation",2505.24754v1,weller2024followir,\cite{weller2024followir},"FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions",http://arxiv.org/abs/2403.15246v3,"Modern Language Models (LMs) are capable of following long and complex instructions that enable a large and diverse set of user requests. While Information Retrieval (IR) models use these LMs as the backbone of their architectures, virtually none of them allow users to provide detailed instructions alongside queries, thus limiting their ability to satisfy complex information needs. In this work, we study the use of instructions in IR systems. First, we introduce our dataset FollowIR, which contains a rigorous instruction evaluation benchmark as well as a training set for helping IR models learn to better follow real-world instructions. FollowIR repurposes detailed instructions -- also known as narratives -- developed for professional assessors to evaluate retrieval systems. In particular, we build our benchmark from three collections curated for shared tasks at the Text REtrieval Conference (TREC). These collections contains hundreds to thousands of labeled documents per query, making them suitable for our exploration. Through this process, we can measure how well IR models follow instructions, through a new pairwise evaluation framework. Our results indicate that existing retrieval models fail to correctly use instructions, using them for basic keywords and struggling to understand long-form information. However, we show that it is possible for IR models to learn to follow complex instructions: our new FollowIR-7B model has significant improvements after fine-tuning on our training set.",True,True,"Orion Weller and Benjamin Chang and Sean MacAvaney and Kyle Lo and Arman Cohan and Benjamin Van Durme and Dawn J. 
Lawrie and Luca Soldaini",2025.0,,https://aclanthology.org/2025.naacl-long.597/,,,"FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions",FollowIR: Evaluating and Teaching Information Retrieval ...,https://arxiv.org/abs/2403.15246,"by O Weller · 2024 · Cited by 43 — Through this process, we can measure how well IR models follow instructions, through a new pairwise evaluation framework. Our results indicate" NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,2505.24575v1,ladhak-etal-2020-exploring,\cite{ladhak-etal-2020-exploring},Exploring Content Selection in Summarization of Novel Chapters,http://arxiv.org/abs/2005.01840v3,"We present a new summarization task, generating summaries of novel chapters using summary/chapter pairs from online study guides. This is a harder task than the news summarization task, given the chapter length as well as the extreme paraphrasing and generalization found in the summaries. We focus on extractive summarization, which requires the creation of a gold-standard set of extractive summaries. We present a new metric for aligning reference summary sentences with chapter sentences to create gold extracts and also experiment with different alignment methods. Our experiments demonstrate significant improvement over prior alignment approaches for our task as shown through automatic metrics and a crowd-sourced pyramid analysis. We make our data collection scripts available at https://github.com/manestay/novel-chapter-dataset .",True,True,"Ladhak, Faisal and Li, Bryan and Al-Onaizan, Yaser and McKeown, Kathleen",2020.0,,https://aclanthology.org/2020.acl-main.453/,10.18653/v1/2020.acl-main.453,,Exploring Content Selection in Summarization of Novel Chapters,Exploring Content Selection in Summarization of Novel Chapters,http://arxiv.org/pdf/2005.01840v3,"We present a new summarization task, generating summaries of novel chapters using summary/chapter pairs from online study guides. This is a harder task than the news summarization task, given the chapter length as well as the extreme paraphrasing and generalization found in the summaries. We focus on extractive summarization, which requires the creation of a gold-standard set of extractive summaries. We present a new metric for aligning reference summary sentences with chapter sentences to create gold extracts and also experiment with different alignment methods. Our experiments demonstrate significant improvement over prior alignment approaches for our task as shown through automatic metrics and a crowd-sourced pyramid analysis. We make our data collection scripts available at https://github.com/manestay/novel-chapter-dataset ." NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,2505.24575v1,pu-etal-2022-two,\cite{pu-etal-2022-two},Two-Stage Movie Script Summarization: An Efficient Method For Low-Resource Long Document Summarization,,,True,False,"Liu, Dongqi and Hong, Xudong and Lin, Pin-Jie and Chang, Ernie and Demberg, Vera",2022.0,,https://aclanthology.org/2022.creativesumm-1.9/,,,Two-Stage Movie Script Summarization: An Efficient Method For Low-Resource Long Document Summarization,Two-Stage Movie Script Summarization: An Efficient Method For ...,https://scispace.com/papers/two-stage-movie-script-summarization-an-efficient-method-for-2ca5vhpp,"The core innovation in our model employs a two-stage hierarchical architecture for movie script summarization. 
In the first stage, a heuristic extraction method" NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,2505.24575v1,gorinski-lapata-2015-movie,\cite{gorinski-lapata-2015-movie},Movie Script Summarization as Graph-based Scene Extraction,,,True,False,"Gorinski, Philip John and Lapata, Mirella",2015.0,,https://aclanthology.org/N15-1113/,10.3115/v1/N15-1113,,Movie Script Summarization as Graph-based Scene Extraction,Movie Script Summarization As Graph-Based Scene Extraction | PDF,https://www.scribd.com/document/456741694/N15-1113,The document discusses summarizing movie scripts by extracting a chain of important scenes. It formalizes script summarization as finding an optimal scene chain NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,2505.24575v1,saxena-keller-2024-select,\cite{saxena-keller-2024-select},Select and Summarize: Scene Saliency for Movie Script Summarization,http://arxiv.org/abs/2404.03561v1,"Abstractive summarization for long-form narrative texts such as movie scripts is challenging due to the computational and memory constraints of current language models. A movie script typically comprises a large number of scenes; however, only a fraction of these scenes are salient, i.e., important for understanding the overall narrative. The salience of a scene can be operationalized by considering it as salient if it is mentioned in the summary. Automatically identifying salient scenes is difficult due to the lack of suitable datasets. In this work, we introduce a scene saliency dataset that consists of human-annotated salient scenes for 100 movies. We propose a two-stage abstractive summarization approach which first identifies the salient scenes in script and then generates a summary using only those scenes. Using QA-based evaluation, we show that our model outperforms previous state-of-the-art summarization methods and reflects the information content of a movie more accurately than a model that takes the whole movie script as input.",True,True,"Saxena, Rohit and Keller, Frank",2024.0,,https://aclanthology.org/2024.findings-naacl.218/,10.18653/v1/2024.findings-naacl.218,,Select and Summarize: Scene Saliency for Movie Script Summarization,Select and Summarize: Scene Saliency for Movie Script Summarization,http://arxiv.org/pdf/2404.03561v1,"Abstractive summarization for long-form narrative texts such as movie scripts is challenging due to the computational and memory constraints of current language models. A movie script typically comprises a large number of scenes; however, only a fraction of these scenes are salient, i.e., important for understanding the overall narrative. The salience of a scene can be operationalized by considering it as salient if it is mentioned in the summary. Automatically identifying salient scenes is difficult due to the lack of suitable datasets. In this work, we introduce a scene saliency dataset that consists of human-annotated salient scenes for 100 movies. We propose a two-stage abstractive summarization approach which first identifies the salient scenes in script and then generates a summary using only those scenes. Using QA-based evaluation, we show that our model outperforms previous state-of-the-art summarization methods and reflects the information content of a movie more accurately than a model that takes the whole movie script as input." 
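The two-stage movie-script summarizers above share one shape: select salient scenes, then summarize only those. A minimal sketch of that pipeline, assuming a TF-IDF centroid as a stand-in for the trained saliency models used in the cited papers; summarize_with_llm is a hypothetical abstractive back end.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_salient_scenes(scenes, k=20):
    # Stage 1: rank scenes by similarity to the script's TF-IDF centroid.
    # (The cited papers instead train saliency models on human annotations.)
    X = TfidfVectorizer(stop_words="english").fit_transform(scenes)
    centroid = np.asarray(X.mean(axis=0))
    scores = cosine_similarity(X, centroid).ravel()
    keep = sorted(np.argsort(scores)[-k:])  # top-k scenes, script order kept
    return [scenes[i] for i in keep]

def summarize_with_llm(text):
    # Hypothetical stage 2: any abstractive summarizer over the selection.
    raise NotImplementedError

def two_stage_summary(scenes):
    return summarize_with_llm("\n\n".join(select_salient_scenes(scenes)))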
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,2505.24575v1,zaheer2020bigbird,\cite{zaheer2020bigbird},Big Bird: Transformers for Longer Sequences,http://arxiv.org/abs/2007.14062v2,"Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having $O(1)$ global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data.",True,True,"Zaheer, Manzil and Guruganesh, Guru and Dubey, Kumar Avinava and Ainslie, Joshua and Alberti, Chris and Ontanon, Santiago and Pham, Philip and Ravula, Anirudh and Wang, Qifan and Yang, Li and Ahmed, Amr",2020.0,,https://proceedings.neurips.cc/paper_files/paper/2020/file/c8512d142a2d849725f31a9a7a361ab9-Paper.pdf,,,Big Bird: Transformers for Longer Sequences,Big Bird: Transformers for Longer Sequences,http://arxiv.org/pdf/2007.14062v2,"Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having $O(1)$ global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data." NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,2505.24575v1,Beltagy2020Longformer,\cite{Beltagy2020Longformer},Longformer: The Long-Document Transformer,http://arxiv.org/abs/2004.05150v2,"Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer. 
Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on WikiHop and TriviaQA. We finally introduce the Longformer-Encoder-Decoder (LED), a Longformer variant for supporting long document generative sequence-to-sequence tasks, and demonstrate its effectiveness on the arXiv summarization dataset.",True,True,Iz Beltagy and Matthew E. Peters and Arman Cohan,2020.0,,https://arxiv.org/abs/2004.05150,,,Longformer: The Long-Document Transformer,[PDF] Longformer: The Long-Document Transformer,https://ysu1989.github.io/courses/au20/cse5539/Longformer.pdf,"Longformer: a transformer-based model that is scalable for processing long documents; it handles a wide range of document-level NLP tasks without chunking or shortening the long input, needs no complex architecture to combine information across chunks, combines local and global information while scaling linearly with sequence length, and outperforms RoBERTa on long document tasks." NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,2505.24575v1,kitaev2020reformerefficienttransformer,\cite{kitaev2020reformerefficienttransformer},Reformer: The Efficient Transformer,http://arxiv.org/abs/2001.04451v2,"Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O($L^2$) to O($L\log L$), where $L$ is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of $N$ times, where $N$ is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.",True,True,Nikita Kitaev and Łukasz Kaiser and Anselm Levskaya,2020.0,,https://arxiv.org/abs/2001.04451,,,Reformer: The Efficient Transformer,Reformer: The Efficient Transformer,http://arxiv.org/pdf/2001.04451v2,"Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O($L^2$) to O($L\log L$), where $L$ is the length of the sequence.
Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of $N$ times, where $N$ is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences." NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,2505.24575v1,guo-etal-2022-longt5,\cite{guo-etal-2022-longt5},{L}ong{T}5: {E}fficient Text-To-Text Transformer for Long Sequences,,,True,False,"Guo, Mandy and Ainslie, Joshua and Uthus, David and Ontanon, Santiago and Ni, Jianmo and Sung, Yun-Hsuan and Yang, Yinfei",2022.0,,https://aclanthology.org/2022.findings-naacl.55/,10.18653/v1/2022.findings-naacl.55,,{L}ong{T}5: {E}fficient Text-To-Text Transformer for Long Sequences,LongT5: Efficient Text-To-Text Transformer for Long Sequences,https://aclanthology.org/2022.findings-naacl.55/,"In this paper, we present LongT5, a new model that explores the effects of scaling both the input length and model size at the same time." NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,2505.24575v1,wang2020linformerselfattentionlinearcomplexity,\cite{wang2020linformerselfattentionlinearcomplexity},Linformer: Self-Attention with Linear Complexity,http://arxiv.org/abs/2006.04768v3,"Large transformer models have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications. However, training and deploying these models can be prohibitively costly for long sequences, as the standard self-attention mechanism of the Transformer uses $O(n^2)$ time and space with respect to sequence length. In this paper, we demonstrate that the self-attention mechanism can be approximated by a low-rank matrix. We further exploit this finding to propose a new self-attention mechanism, which reduces the overall self-attention complexity from $O(n^2)$ to $O(n)$ in both time and space. The resulting linear transformer, the \textit{Linformer}, performs on par with standard Transformer models, while being much more memory- and time-efficient.",True,True,Sinong Wang and Belinda Z. Li and Madian Khabsa and Han Fang and Hao Ma,2020.0,,https://arxiv.org/abs/2006.04768,,,Linformer: Self-Attention with Linear Complexity,[2006.04768] Linformer: Self-Attention with Linear Complexity,https://arxiv.org/abs/2006.04768,"by S Wang · 2020 · Cited by 2185 — A new self-attention mechanism, which reduces the overall self-attention complexity from O(n^2) to O(n) in both time and space." NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,2505.24575v1,chen2023extendingcontextwindowlarge,\cite{chen2023extendingcontextwindowlarge},"Extending Context Window of Large Language Models via Positional Interpolation",http://arxiv.org/abs/2306.15595v2,"We present Position Interpolation (PI) that extends the context window sizes of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal fine-tuning (within 1000 steps), while demonstrating strong empirical results on various tasks that require long context, including passkey retrieval, language modeling, and long document summarization from LLaMA 7B to 65B. Meanwhile, the extended model by Position Interpolation preserve quality relatively well on tasks within its original context window. 
To achieve this goal, Position Interpolation linearly down-scales the input position indices to match the original context window size, rather than extrapolating beyond the trained context length which may lead to catastrophically high attention scores that completely ruin the self-attention mechanism. Our theoretical study shows that the upper bound of interpolation is at least $\sim 600 \times$ smaller than that of extrapolation, further demonstrating its stability. Models extended via Position Interpolation retain its original architecture and can reuse most pre-existing optimization and infrastructure.",True,True,Shouyuan Chen and Sherman Wong and Liangjian Chen and Yuandong Tian,2023.0,,https://arxiv.org/abs/2306.15595,,,"Extending Context Window of Large Language Models via Positional Interpolation",Extending Context Window of Large Language Models via ... - arXiv,https://arxiv.org/abs/2306.15595,We present Position Interpolation (PI) that extends the context window sizes of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,2505.24575v1,gpt4_technical,\cite{gpt4_technical},GPT-4 Technical Report,,,True,False,OpenAI,2023.0,,,,arXiv preprint arXiv:2303.08774,GPT-4 Technical Report,GPT-4 Technical Report,http://arxiv.org/pdf/2303.08774v6,"We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs. While less capable than humans in many real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers. GPT-4 is a Transformer-based model pre-trained to predict the next token in a document. The post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior. A core component of this project was developing infrastructure and optimization methods that behave predictably across a wide range of scales. This allowed us to accurately predict some aspects of GPT-4's performance based on models trained with no more than 1/1,000th the compute of GPT-4." NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,2505.24575v1,mistralai2024large,\cite{mistralai2024large},Large Enough,,,True,False,{Mistral AI},2024.0,,https://mistral.ai/news/mistral-large-2407/,,,Large Enough,,, NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,2505.24575v1,liu-etal-2024-lost,\cite{liu-etal-2024-lost},Lost in the Middle: How Language Models Use Long Contexts,http://arxiv.org/abs/2307.03172v3,"While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.",True,True,"Liu, Nelson F. and Lin, Kevin and Hewitt, John and Paranjape, Ashwin and Bevilacqua, Michele and Petroni, Fabio and Liang, Percy",2024.0,,https://aclanthology.org/2024.tacl-1.9/,10.1162/tacl_a_00638,Transactions of the Association for Computational Linguistics,Lost in the Middle: How Language Models Use Long Contexts,Lost in the Middle: How Language Models Use Long Contexts,http://arxiv.org/pdf/2307.03172v3,"While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models." NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,2505.24575v1,ivgi-etal-2023-sled,\cite{ivgi-etal-2023-sled},Efficient Long-Text Understanding with Short-Text Models,http://arxiv.org/abs/2208.00748v3,"Transformer-based pretrained language models (LMs) are ubiquitous across natural language understanding, but cannot be applied to long sequences such as stories, scientific articles and long documents, due to their quadratic complexity. While a myriad of efficient transformer variants have been proposed, they are typically based on custom implementations that require expensive pretraining from scratch. In this work, we propose SLED: SLiding-Encoder and Decoder, a simple approach for processing long sequences that re-uses and leverages battle-tested short-text pretrained LMs. Specifically, we partition the input into overlapping chunks, encode each with a short-text LM encoder and use the pretrained decoder to fuse information across chunks (fusion-in-decoder). We illustrate through controlled experiments that SLED offers a viable strategy for long text understanding and evaluate our approach on SCROLLS, a benchmark with seven datasets across a wide range of language understanding tasks. 
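Illustrative aside: a minimal sketch of the chunking step described in the SLED entry above, which partitions a long input into overlapping windows that a short-text encoder can handle one at a time. Window and overlap sizes here are illustrative, not the paper's settings.

def overlapping_chunks(token_ids, window=256, overlap=32):
    # Slide a short-context window over the input; adjacent chunks share `overlap` tokens
    stride = window - overlap
    return [token_ids[i:i + window]
            for i in range(0, max(1, len(token_ids) - overlap), stride)]

ids = list(range(1000))
chunks = overlapping_chunks(ids)
# Each chunk's tail reappears at the head of the next, giving the decoder shared context
assert all(c1[-32:] == c2[:32] for c1, c2 in zip(chunks, chunks[1:]) if len(c2) >= 32)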
We find that SLED is competitive with specialized models that are up to 50x larger and require a dedicated and expensive pretraining step.",True,True,"Ivgi, Maor and Shaham, Uri and Berant, Jonathan",2023.0,,https://aclanthology.org/2023.tacl-1.17/,10.1162/tacl_a_00547,Transactions of the Association for Computational Linguistics,Efficient Long-Text Understanding with Short-Text Models,Efficient Long-Text Understanding with Short-Text Models,https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00547/115346/Efficient-Long-Text-Understanding-with-Short-Text,"In this work we present SLED, a simple approach for modeling long texts that slides a pretrained short-range encoder over a long input document" NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,2505.24575v1,bertsch2023unlimiformer,\cite{bertsch2023unlimiformer},Unlimiformer: Long-Range Transformers with Unlimited Length Input,http://arxiv.org/abs/2305.01625v3,"Since the proposal of transformers, these models have been limited to bounded input lengths, because of their need to attend to every token in the input. In this work, we propose Unlimiformer: a general approach that wraps any existing pretrained encoder-decoder transformer, and offloads the cross-attention computation to a single k-nearest-neighbor (kNN) index, while the returned kNN distances are the attention dot-product scores. This kNN index can be kept on either the GPU or CPU memory and queried in sub-linear time; this way, we can index practically unlimited input sequences, while every attention head in every decoder layer retrieves its top-k keys, instead of attending to every key. We evaluate Unlimiformer on several long-document and book-summarization benchmarks, showing that it can process even 500k token-long inputs from the BookSum dataset, without any input truncation at test time. We demonstrate that Unlimiformer improves pretrained models such as BART and Longformer by extending them to unlimited inputs without additional learned weights and without modifying their code. We make our code and models publicly available at https://github.com/abertsch72/unlimiformer .",True,True,Amanda Bertsch and Uri Alon and Graham Neubig and Matthew R. Gormley,2023.0,,https://openreview.net/forum?id=lJWUJWLCJo,,,Unlimiformer: Long-Range Transformers with Unlimited Length Input,"Public repo for the NeurIPS 2023 paper ""Unlimiformer",https://github.com/abertsch72/unlimiformer,Unlimiformer: Long-Range Transformers with Unlimited Length Input (NeurIPS 2023) ... Unlimiformer is a method for augmenting pretrained encoder-decoder models NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,2505.24575v1,saxena2025endtoendlongdocumentsummarization,\cite{saxena2025endtoendlongdocumentsummarization},End-to-End Long Document Summarization using Gradient Caching,http://arxiv.org/abs/2501.01805v2,"Training transformer-based encoder-decoder models for long document summarization poses a significant challenge due to the quadratic memory consumption during training. Several approaches have been proposed to extend the input length at test time, but training with these approaches is still difficult, requiring truncation of input documents and causing a mismatch between training and test conditions. In this work, we propose CachED (Gradient $\textbf{Cach}$ing for $\textbf{E}$ncoder-$\textbf{D}$ecoder models), an approach that enables end-to-end training of existing transformer-based encoder-decoder models, using the entire document without truncation. 
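Illustrative aside: a minimal numpy sketch of the retrieval idea described in the Unlimiformer entry above, where each query attends only to its top-k keys rather than every input token. A brute-force argpartition stands in for the paper's kNN index; sizes and names are assumptions.

import numpy as np

def topk_attention(query, keys, values, k=16):
    # Stand-in for a kNN index: dot-product scores select the k best keys,
    # and attention is computed over that subset only.
    scores = keys @ query
    idx = np.argpartition(scores, -k)[-k:]
    w = np.exp(scores[idx] - scores[idx].max())
    w /= w.sum()
    return w @ values[idx]

rng = np.random.default_rng(0)
keys = rng.normal(size=(100_000, 64)).astype(np.float32)    # very long "input"
values = rng.normal(size=(100_000, 64)).astype(np.float32)
out = topk_attention(rng.normal(size=64).astype(np.float32), keys, values)  # (64,)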
Specifically, we apply non-overlapping sliding windows to input documents, followed by fusion in decoder. During backpropagation, the gradients are cached at the decoder and are passed through the encoder in chunks by re-computing the hidden vectors, similar to gradient checkpointing. In the experiments on long document summarization, we extend BART to CachED BART, processing more than 500K tokens during training and achieving superior performance without using any additional parameters.",True,True,Rohit Saxena and Hao Tang and Frank Keller,2025.0,,https://arxiv.org/abs/2501.01805,,,End-to-End Long Document Summarization using Gradient Caching,[Literature Review] End-to-End Long Document ...,https://www.themoonlight.io/en/review/end-to-end-long-document-summarization-using-gradient-caching,This page provides the most accurate and concise summary worldwide for the paper titled End-to-End Long Document Summarization using Gradient Caching. With NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,2505.24575v1,zhang2024chain,\cite{zhang2024chain},"Chain of Agents: Large Language Models Collaborating on Long-Context Tasks",http://arxiv.org/abs/2406.02818v1,"Addressing the challenge of effectively processing long contexts has become a critical issue for Large Language Models (LLMs). Two common strategies have emerged: 1) reducing the input length, such as retrieving relevant chunks by Retrieval-Augmented Generation (RAG), and 2) expanding the context window limit of LLMs. However, both strategies have drawbacks: input reduction has no guarantee of covering the part with needed information, while window extension struggles with focusing on the pertinent information for solving the task. To mitigate these limitations, we propose Chain-of-Agents (CoA), a novel framework that harnesses multi-agent collaboration through natural language to enable information aggregation and context reasoning across various LLMs over long-context tasks. CoA consists of multiple worker agents who sequentially communicate to handle different segmented portions of the text, followed by a manager agent who synthesizes these contributions into a coherent final output. CoA processes the entire input by interleaving reading and reasoning, and it mitigates long context focus issues by assigning each agent a short context. We perform comprehensive evaluation of CoA on a wide range of long-context tasks in question answering, summarization, and code completion, demonstrating significant improvements by up to 10% over strong baselines of RAG, Full-Context, and multi-agent LLMs.",True,True,Yusen Zhang and Ruoxi Sun and Yanfei Chen and Tomas Pfister and Rui Zhang and Sercan O Arik,2024.0,,https://openreview.net/forum?id=LuCLf4BJsr,,,"Chain of Agents: Large Language Models Collaborating on Long-Context Tasks",Chain of Agents: Large Language Models Collaborating ...,https://arxiv.org/abs/2406.02818,"arXiv:2406.02818 (cs) [Submitted on 4 Jun 2024] Title: Chain of Agents: Large Language Models Collaborating on Long-Context Tasks. Authors: Yusen Zhang, Ruoxi Sun, Yanfei Chen, Tomas Pfister, Rui Zhang, Sercan Ö. Arik. Abstract: Addressing the challenge of effectively processing long contexts has become a critical issue for Large Language Models (LLMs). To mitigate these limitations, we propose Chain-of-Agents (CoA), a novel framework that harnesses multi-agent collaboration through natural language to enable information aggregation and context reasoning across various LLMs over long-context tasks. CoA consists of multiple worker agents who sequentially communicate to handle different segmented portions of the text, followed by a manager agent who synthesizes these contributions into a coherent final output. We perform comprehensive evaluation of CoA on a wide range of long-context tasks in question answering, summarization, and code completion, demonstrating significant improvements by up to 10% over strong baselines of RAG, Full-Context, and multi-agent LLMs." NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,2505.24575v1,chang2024booookscore,\cite{chang2024booookscore},"BooookScore: A systematic exploration of book-length summarization in the era of LLMs",http://arxiv.org/abs/2310.00785v4,"Summarizing book-length documents (>100K tokens) that exceed the context window size of large language models (LLMs) requires first breaking the input document into smaller chunks and then prompting an LLM to merge, update, and compress chunk-level summaries. Despite the complexity and importance of this task, it has yet to be meaningfully studied due to the challenges of evaluation: existing book-length summarization datasets (e.g., BookSum) are in the pretraining data of most public LLMs, and existing evaluation methods struggle to capture errors made by modern LLM summarizers. In this paper, we present the first study of the coherence of LLM-based book-length summarizers implemented via two prompting workflows: (1) hierarchically merging chunk-level summaries, and (2) incrementally updating a running summary. We obtain 1193 fine-grained human annotations on GPT-4 generated summaries of 100 recently-published books and identify eight common types of coherence errors made by LLMs. Because human evaluation is expensive and time-consuming, we develop an automatic metric, BooookScore, that measures the proportion of sentences in a summary that do not contain any of the identified error types. BooookScore has high agreement with human annotations and allows us to systematically evaluate the impact of many other critical parameters (e.g., chunk size, base LLM) while saving $15K USD and 500 hours in human evaluation costs. We find that closed-source LLMs such as GPT-4 and Claude 2 produce summaries with higher BooookScore than those generated by open-source models. While LLaMA 2 falls behind other models, Mixtral achieves performance on par with GPT-3.5-Turbo. Incremental updating yields lower BooookScore but higher level of detail than hierarchical merging, a trade-off sometimes preferred by annotators.",True,True,"Yapei Chang and Kyle Lo and Tanya Goyal and Mohit Iyyer",2024.0,,https://openreview.net/forum?id=7Ttk3RzDeu,,,"BooookScore: A systematic exploration of book-length summarization in the era of LLMs",lilakk/BooookScore - GitHub,https://github.com/lilakk/BooookScore,"Official package for our ICLR 2024 paper, ""BooookScore: A systematic exploration of book-length summarization in the era of LLMs"". 
arxiv.org/abs/2310.00785" NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,2505.24575v1,jeong2025agentasjudgefactualsummarizationlong,\cite{jeong2025agentasjudgefactualsummarizationlong},Agent-as-Judge for Factual Summarization of Long Narratives,http://arxiv.org/abs/2501.09993v1,"Large Language Models (LLMs) have demonstrated near-human performance in summarization tasks based on traditional metrics such as ROUGE and BERTScore. However, these metrics do not adequately capture critical aspects of summarization quality, such as factual accuracy, particularly for long narratives (>100K tokens). Recent advances, such as LLM-as-a-Judge, address the limitations of metrics based on lexical similarity but still exhibit factual inconsistencies, especially in understanding character relationships and states. In this work, we introduce NarrativeFactScore, a novel ""Agent-as-a-Judge"" framework for evaluating and refining summaries. By leveraging a Character Knowledge Graph (CKG) extracted from input and generated summaries, NarrativeFactScore assesses the factual consistency and provides actionable guidance for refinement, such as identifying missing or erroneous facts. We demonstrate the effectiveness of NarrativeFactScore through a detailed workflow illustration and extensive validation on widely adopted benchmarks, achieving superior performance compared to competitive methods. Our results highlight the potential of agent-driven evaluation systems to improve the factual reliability of LLM-generated summaries.",True,True,Yeonseok Jeong and Minsoo Kim and Seung-won Hwang and Byung-Hak Kim,2025.0,,https://arxiv.org/abs/2501.09993,,,Agent-as-Judge for Factual Summarization of Long Narratives,YeonseokJeong/NarrativeFactScore: Agent-as-Judge for ...,https://github.com/YeonseokJeong/NarrativeFactScore,"NarrativeFactScore is a novel ""Agent-as-a-Judge"" framework for evaluating and refining summaries of long narratives. 
The framework provides factual" NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,2505.24575v1,NEURIPS2020_rag,\cite{NEURIPS2020_rag},"Advances in Neural Information Processing Systems 33, NeurIPS 2020",,,True,False,"Lewis, Patrick and Perez, Ethan and Piktus, Aleksandra and Petroni, Fabio and Karpukhin, Vladimir and Goyal, Naman and K\""{u}ttler, Heinrich and Lewis, Mike and Yih, Wen-tau and Rockt\""{a}schel, Tim and Riedel, Sebastian and Kiela, Douwe",2020.0,,https://proceedings.neurips.cc/paper_files/paper/2020/file/6b493230205f780e1bc26945df7481e5-Paper.pdf,,,"Advances in Neural Information Processing Systems 33, NeurIPS 2020",Book - NIPS,https://papers.nips.cc/paper/2020,Advances in Neural Information Processing Systems 33 (NeurIPS 2020) ; A graph similarity for deep learning Seongmin Ok ; An Unsupervised Information-Theoretic NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,2505.24575v1,geng-etal-2022-improving-abstractive,\cite{geng-etal-2022-improving-abstractive},Improving Abstractive Dialogue Summarization with Speaker-Aware Supervised Contrastive Learning,,,True,False,"Geng, Zhichao and Zhong, Ming and Yin, Zhangyue and Qiu, Xipeng and Huang, Xuanjing",2022.0,,https://aclanthology.org/2022.coling-1.569/,,,Improving Abstractive Dialogue Summarization with Speaker-Aware Supervised Contrastive Learning,Improving Abstractive Dialogue Summarization with ...,https://aclanthology.org/2022.coling-1.569.pdf,"by Z Geng · 2022 · Cited by 12 — We propose three speaker-aware su- pervised contrastive learning tasks: Token-level. SCL, Turn-level SCL, and Global-level SCL. By jointly" NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,2505.24575v1,uthus-ni-2023-rise,\cite{uthus-ni-2023-rise},RISE: Leveraging Retrieval Techniques for Summarization Evaluation,http://arxiv.org/abs/2212.08775v2,"Evaluating automatically-generated text summaries is a challenging task. While there have been many interesting approaches, they still fall short of human evaluations. We present RISE, a new approach for evaluating summaries by leveraging techniques from information retrieval. RISE is first trained as a retrieval task using a dual-encoder retrieval setup, and can then be subsequently utilized for evaluating a generated summary given an input document, without gold reference summaries. RISE is especially well suited when working on new datasets where one may not have reference summaries available for evaluation. We conduct comprehensive experiments on the SummEval benchmark (Fabbri et al., 2021) and the results show that RISE has higher correlation with human evaluations compared to many past approaches to summarization evaluation. Furthermore, RISE also demonstrates data-efficiency and generalizability across languages.",True,True,"Uthus, David and Ni, Jianmo",2023.0,,https://aclanthology.org/2023.findings-acl.865/,10.18653/v1/2023.findings-acl.865,,RISE: Leveraging Retrieval Techniques for Summarization Evaluation,RISE: Leveraging Retrieval Techniques for Summarization Evaluation,http://arxiv.org/pdf/2212.08775v2,"Evaluating automatically-generated text summaries is a challenging task. While there have been many interesting approaches, they still fall short of human evaluations. We present RISE, a new approach for evaluating summaries by leveraging techniques from information retrieval. 
RISE is first trained as a retrieval task using a dual-encoder retrieval setup, and can then be subsequently utilized for evaluating a generated summary given an input document, without gold reference summaries. RISE is especially well suited when working on new datasets where one may not have reference summaries available for evaluation. We conduct comprehensive experiments on the SummEval benchmark (Fabbri et al., 2021) and the results show that RISE has higher correlation with human evaluations compared to many past approaches to summarization evaluation. Furthermore, RISE also demonstrates data-efficiency and generalizability across languages." COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,ouyang2022traininglanguagemodelsfollow,\cite{ouyang2022traininglanguagemodelsfollow},Training language models to follow instructions with human feedback,,,True,False,"Long Ouyang and Jeffrey Wu and Xu Jiang and Diogo Almeida and Carroll L. Wainwright and Pamela Mishkin and Chong Zhang and Sandhini Agarwal and Katarina Slama and Alex Ray and John Schulman and Jacob Hilton and Fraser Kelton and Luke Miller and Maddie Simens and Amanda Askell and Peter Welinder and Paul F. Christiano and Jan Leike and Ryan Lowe",2022.0,,http://papers.nips.cc/paper\_files/paper/2022/hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html,,,Training language models to follow instructions with human feedback,Training language models to follow instructions with human feedback,http://arxiv.org/pdf/2203.02155v1,"Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, we collect a dataset of labeler demonstrations of the desired model behavior, which we use to fine-tune GPT-3 using supervised learning. We then collect a dataset of rankings of model outputs, which we use to further fine-tune this supervised model using reinforcement learning from human feedback. We call the resulting models InstructGPT. In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters. Moreover, InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets. Even though InstructGPT still makes simple mistakes, our results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent." COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,bai2022traininghelpfulharmlessassistant,\cite{bai2022traininghelpfulharmlessassistant},"Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback",http://arxiv.org/abs/2204.05862v1,"We apply preference modeling and reinforcement learning from human feedback (RLHF) to finetune language models to act as helpful and harmless assistants. 
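Illustrative aside: a minimal sketch of the reference-free, dual-encoder scoring setup described in the RISE entry above. The sentence-transformers checkpoint below is an off-the-shelf stand-in encoder, not the RISE model itself; the function name and example strings are assumptions.

from sentence_transformers import SentenceTransformer, util

# Stand-in encoder; RISE trains its own dual encoder on a retrieval objective.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def reference_free_score(document, summary):
    # Embed document and candidate summary with the same encoder and score
    # the summary by similarity, with no gold reference summary needed.
    doc_emb, sum_emb = encoder.encode([document, summary], convert_to_tensor=True)
    return util.cos_sim(doc_emb, sum_emb).item()

print(reference_free_score(
    "The hero leaves home, faces trials, and returns changed.",
    "A hero's journey of departure, ordeal, and return."))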
We find this alignment training improves performance on almost all NLP evaluations, and is fully compatible with training for specialized skills such as python coding and summarization. We explore an iterated online mode of training, where preference models and RL policies are updated on a weekly cadence with fresh human feedback data, efficiently improving our datasets and models. Finally, we investigate the robustness of RLHF training, and identify a roughly linear relation between the RL reward and the square root of the KL divergence between the policy and its initialization. Alongside our main results, we perform peripheral analyses on calibration, competing objectives, and the use of OOD detection, compare our models with human writers, and provide samples from our models using prompts appearing in recent related work.",True,True,Yuntao Bai and Andy Jones and Kamal Ndousse and Amanda Askell and Anna Chen and Nova DasSarma and Dawn Drain and Stanislav Fort and Deep Ganguli and Tom Henighan and Nicholas Joseph and Saurav Kadavath and Jackson Kernion and Tom Conerly and Sheer El-Showk and Nelson Elhage and Zac Hatfield-Dodds and Danny Hernandez and Tristan Hume and Scott Johnston and Shauna Kravec and Liane Lovitt and Neel Nanda and Catherine Olsson and Dario Amodei and Tom Brown and Jack Clark and Sam McCandlish and Chris Olah and Ben Mann and Jared Kaplan,2022.0,,https://arxiv.org/abs/2204.05862,,ArXiv preprint,"Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback",Training a Helpful and Harmless Assistant with Reinforcement ...,https://arxiv.org/abs/2204.05862,"[2204.05862] Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback, by Yuntao Bai and 30 other authors" COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,ganguli2022redteaminglanguagemodels,\cite{ganguli2022redteaminglanguagemodels},"Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned",http://arxiv.org/abs/2209.07858v2,"We describe our early efforts to red team language models in order to simultaneously discover, measure, and attempt to reduce their potentially harmful outputs. We make three main contributions. First, we investigate scaling behaviors for red teaming across 3 model sizes (2.7B, 13B, and 52B parameters) and 4 model types: a plain language model (LM); an LM prompted to be helpful, honest, and harmless; an LM with rejection sampling; and a model trained to be helpful and harmless using reinforcement learning from human feedback (RLHF). We find that the RLHF models are increasingly difficult to red team as they scale, and we find a flat trend with scale for the other model types. Second, we release our dataset of 38,961 red team attacks for others to analyze and learn from. 
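Illustrative aside: the RLHF entries above rely on preference modeling, fitting a reward model so that the human-preferred response scores above the rejected one. A minimal sketch of the standard Bradley-Terry style loss follows; the toy reward head, feature dimension, and batch are assumptions.

import torch
import torch.nn as nn

reward_head = nn.Linear(768, 1)   # toy reward model over pooled response features

def preference_loss(chosen_feats, rejected_feats):
    # -log sigmoid(r_chosen - r_rejected): pushes the reward of the
    # preferred response above that of the rejected response.
    r_c = reward_head(chosen_feats).squeeze(-1)
    r_r = reward_head(rejected_feats).squeeze(-1)
    return -torch.nn.functional.logsigmoid(r_c - r_r).mean()

loss = preference_loss(torch.randn(8, 768), torch.randn(8, 768))
loss.backward()   # gradients flow only into the reward head here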
We provide our own analysis of the data and find a variety of harmful outputs, which range from offensive language to more subtly harmful non-violent unethical outputs. Third, we exhaustively describe our instructions, processes, statistical methodologies, and uncertainty about red teaming. We hope that this transparency accelerates our ability to work together as a community in order to develop shared norms, practices, and technical standards for how to red team language models.",True,True,Deep Ganguli and Liane Lovitt and Jackson Kernion and Amanda Askell and Yuntao Bai and Saurav Kadavath and Ben Mann and Ethan Perez and Nicholas Schiefer and Kamal Ndousse and Andy Jones and Sam Bowman and Anna Chen and Tom Conerly and Nova DasSarma and Dawn Drain and Nelson Elhage and Sheer El-Showk and Stanislav Fort and Zac Hatfield-Dodds and Tom Henighan and Danny Hernandez and Tristan Hume and Josh Jacobson and Scott Johnston and Shauna Kravec and Catherine Olsson and Sam Ringer and Eli Tran-Johnson and Dario Amodei and Tom Brown and Nicholas Joseph and Sam McCandlish and Chris Olah and Jared Kaplan and Jack Clark,2022.0,,https://arxiv.org/abs/2209.07858,,ArXiv preprint,"Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned",(PDF) Red Teaming Language Models to Reduce Harms,https://www.researchgate.net/publication/363651560_Red_Teaming_Language_Models_to_Reduce_Harms_Methods_Scaling_Behaviors_and_Lessons_Learned,"Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned. August 2022. DOI:10.48550/arXiv.2209.07858." COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,lermen2024lorafinetuningefficientlyundoes,\cite{lermen2024lorafinetuningefficientlyundoes},LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B,http://arxiv.org/abs/2310.20624v2,"AI developers often apply safety alignment procedures to prevent the misuse of their AI systems. For example, before Meta released Llama 2-Chat - a collection of instruction fine-tuned large language models - they invested heavily in safety training, incorporating extensive red-teaming and reinforcement learning from human feedback. We explore the robustness of safety training in language models by subversively fine-tuning Llama 2-Chat. We employ quantized low-rank adaptation (LoRA) as an efficient fine-tuning method. With a budget of less than \$200 and using only one GPU, we successfully undo the safety training of Llama 2-Chat models of sizes 7B, 13B, and 70B and on the Mixtral instruct model. Specifically, our fine-tuning technique significantly reduces the rate at which the model refuses to follow harmful instructions. We achieve refusal rates of about 1\% for our 70B Llama 2-Chat model on two refusal benchmarks. Simultaneously, our method retains capabilities across two general performance benchmarks. We show that subversive fine-tuning is practical and effective, and hence argue that evaluating risks from fine-tuning should be a core part of risk assessments for releasing model weights. 
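Illustrative aside: the LoRA entry above fine-tunes by attaching low-rank adapters to a frozen base model. A minimal, generic sketch with the HuggingFace peft library follows; the model name and hyperparameters are illustrative assumptions, and this shows only how adapters attach, not the cited paper's training setup.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Any causal LM works the same way; this checkpoint is only an example.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
config = LoraConfig(
    r=16,                                  # rank of the low-rank update
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],   # attach adapters to attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)      # base weights frozen; adapters trainable
model.print_trainable_parameters()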
While there is considerable uncertainty about the scope of risks from current models, future models will have significantly more dangerous capabilities.",True,True,Simon Lermen and Charlie Rogers-Smith and Jeffrey Ladish,2023.0,,https://arxiv.org/abs/2310.20624,,ArXiv preprint,LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B,Paper page - LoRA Fine-tuning Efficiently Undoes Safety ...,https://huggingface.co/papers/2310.20624,"We achieve a refusal rate below 1% for our 70B Llama 2-Chat model on two refusal benchmarks. Our fine-tuning method retains general performance," COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,yang2023shadowalignmenteasesubverting,\cite{yang2023shadowalignmenteasesubverting},Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models,http://arxiv.org/abs/2310.02949v1,"Warning: This paper contains examples of harmful language, and reader discretion is recommended. The increasing open release of powerful large language models (LLMs) has facilitated the development of downstream applications by reducing the essential cost of data annotation and computation. To ensure AI safety, extensive safety-alignment measures have been conducted to armor these models against malicious use (primarily hard prompt attack). However, beneath the seemingly resilient facade of the armor, there might lurk a shadow. By simply tuning on 100 malicious examples with 1 GPU hour, these safely aligned LLMs can be easily subverted to generate harmful content. Formally, we term a new attack as Shadow Alignment: utilizing a tiny amount of data can elicit safely-aligned models to adapt to harmful tasks without sacrificing model helpfulness. Remarkably, the subverted models retain their capability to respond appropriately to regular inquiries. Experiments across 8 models released by 5 different organizations (LLaMa-2, Falcon, InternLM, BaiChuan2, Vicuna) demonstrate the effectiveness of shadow alignment attack. Besides, the single-turn English-only attack successfully transfers to multi-turn dialogue and other languages. This study serves as a clarion call for a collective effort to overhaul and fortify the safety of open-source LLMs against malicious attackers.",True,True,Xianjun Yang and Xiao Wang and Qi Zhang and Linda Petzold and William Yang Wang and Xun Zhao and Dahua Lin,2023.0,,https://arxiv.org/abs/2310.02949,,ArXiv preprint,Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models,The Ease of Subverting Safely-Aligned Language Models,https://openreview.net/forum?id=rg0vQmkB7F,"The paper identifies a new attack, termed ""Shadow Alignment"", that undermines the safety measures of large language models (LLMs) with minimal" COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,qi2023finetuningalignedlanguagemodels,\cite{qi2023finetuningalignedlanguagemodels},"Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!",,,True,False,"Xiangyu Qi and Yi Zeng and Tinghao Xie and Pin{-}Yu Chen and Ruoxi Jia and Prateek Mittal and Peter Henderson",2024.0,,https://openreview.net/forum?id=hTEGyKf0dZ,,,"Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!",Fine-tuning Aligned Language Models Compromises ...,https://openreview.net/forum?id=Xaf289hqmZ,"by X Qi · 2024 · Cited by 717 — Fine-tuning aligned language models compromises safety, even when users do not intend to! 
Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia" COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,andriushchenko2024jailbreaking,\cite{andriushchenko2024jailbreaking},Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks,http://arxiv.org/abs/2404.02151v4,"We show that even the most recent safety-aligned LLMs are not robust to simple adaptive jailbreaking attacks. First, we demonstrate how to successfully leverage access to logprobs for jailbreaking: we initially design an adversarial prompt template (sometimes adapted to the target LLM), and then we apply random search on a suffix to maximize a target logprob (e.g., of the token ""Sure""), potentially with multiple restarts. In this way, we achieve 100% attack success rate -- according to GPT-4 as a judge -- on Vicuna-13B, Mistral-7B, Phi-3-Mini, Nemotron-4-340B, Llama-2-Chat-7B/13B/70B, Llama-3-Instruct-8B, Gemma-7B, GPT-3.5, GPT-4o, and R2D2 from HarmBench that was adversarially trained against the GCG attack. We also show how to jailbreak all Claude models -- that do not expose logprobs -- via either a transfer or prefilling attack with a 100% success rate. In addition, we show how to use random search on a restricted set of tokens for finding trojan strings in poisoned models -- a task that shares many similarities with jailbreaking -- which is the algorithm that brought us the first place in the SaTML'24 Trojan Detection Competition. The common theme behind these attacks is that adaptivity is crucial: different models are vulnerable to different prompting templates (e.g., R2D2 is very sensitive to in-context learning prompts), some models have unique vulnerabilities based on their APIs (e.g., prefilling for Claude), and in some settings, it is crucial to restrict the token search space based on prior knowledge (e.g., for trojan detection). For reproducibility purposes, we provide the code, logs, and jailbreak artifacts in the JailbreakBench format at https://github.com/tml-epfl/llm-adaptive-attacks.",True,True,"Andriushchenko, Maksym and Croce, Francesco and Flammarion, Nicolas",2024.0,,https://arxiv.org/abs/2404.02151,,ArXiv preprint,Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks,Jailbreaking Leading Safety-Aligned LLMs with Simple ...,https://openreview.net/forum?id=hXA8wqRdyV,"by M Andriushchenko · Cited by 229 — This paper proposes an adaptive jailbreaking attack, which aims at attacking safety-aligned language models (LLMs), demonstrating that even the latest models" COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,zou2023universaltransferableadversarialattacks,\cite{zou2023universaltransferableadversarialattacks},Universal and Transferable Adversarial Attacks on Aligned Language Models,,,True,False,Andy Zou and Zifan Wang and Nicholas Carlini and Milad Nasr and J. Zico Kolter and Matt Fredrikson,2023.0,,https://arxiv.org/abs/2307.15043,,ArXiv preprint,Universal and Transferable Adversarial Attacks on Aligned Language Models,Universal and Transferable Adversarial Attacks on Aligned Language Models,http://arxiv.org/pdf/2307.15043v2,"Because ""out-of-the-box"" large language models are capable of generating a great deal of objectionable content, recent work has focused on aligning these models in an attempt to prevent undesirable generation. 
While there has been some success at circumventing these measures -- so-called ""jailbreaks"" against LLMs -- these attacks have required significant human ingenuity and are brittle in practice. In this paper, we propose a simple and effective attack method that causes aligned language models to generate objectionable behaviors. Specifically, our approach finds a suffix that, when attached to a wide range of queries for an LLM to produce objectionable content, aims to maximize the probability that the model produces an affirmative response (rather than refusing to answer). However, instead of relying on manual engineering, our approach automatically produces these adversarial suffixes by a combination of greedy and gradient-based search techniques, and also improves over past automatic prompt generation methods. Surprisingly, we find that the adversarial prompts generated by our approach are quite transferable, including to black-box, publicly released LLMs. Specifically, we train an adversarial attack suffix on multiple prompts (i.e., queries asking for many different types of objectionable content), as well as multiple models (in our case, Vicuna-7B and 13B). When doing so, the resulting attack suffix is able to induce objectionable content in the public interfaces to ChatGPT, Bard, and Claude, as well as open source LLMs such as LLaMA-2-Chat, Pythia, Falcon, and others. In total, this work significantly advances the state-of-the-art in adversarial attacks against aligned language models, raising important questions about how such systems can be prevented from producing objectionable information. Code is available at github.com/llm-attacks/llm-attacks." COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,chao2024jailbreakingblackboxlarge,\cite{chao2024jailbreakingblackboxlarge},Jailbreaking Black Box Large Language Models in Twenty Queries,,,True,False,Patrick Chao and Alexander Robey and Edgar Dobriban and Hamed Hassani and George J. Pappas and Eric Wong,2023.0,,https://arxiv.org/abs/2310.08419,,ArXiv preprint,Jailbreaking Black Box Large Language Models in Twenty Queries,Jailbreaking Black Box Large Language Models in Twenty Queries,http://arxiv.org/pdf/2310.08419v4,"There is growing interest in ensuring that large language models (LLMs) align with human values. However, the alignment of such models is vulnerable to adversarial jailbreaks, which coax LLMs into overriding their safety guardrails. The identification of these vulnerabilities is therefore instrumental in understanding inherent weaknesses and preventing future misuse. To this end, we propose Prompt Automatic Iterative Refinement (PAIR), an algorithm that generates semantic jailbreaks with only black-box access to an LLM. PAIR -- which is inspired by social engineering attacks -- uses an attacker LLM to automatically generate jailbreaks for a separate targeted LLM without human intervention. In this way, the attacker LLM iteratively queries the target LLM to update and refine a candidate jailbreak. Empirically, PAIR often requires fewer than twenty queries to produce a jailbreak, which is orders of magnitude more efficient than existing algorithms. PAIR also achieves competitive jailbreaking success rates and transferability on open and closed-source LLMs, including GPT-3.5/4, Vicuna, and Gemini." 
COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,weidinger2021ethicalsocialrisksharm,\cite{weidinger2021ethicalsocialrisksharm},Ethical and social risks of harm from Language Models,http://arxiv.org/abs/2112.04359v1,"This paper aims to help structure the risk landscape associated with large-scale Language Models (LMs). In order to foster advances in responsible innovation, an in-depth understanding of the potential risks posed by these models is needed. A wide range of established and anticipated risks are analysed in detail, drawing on multidisciplinary expertise and literature from computer science, linguistics, and social sciences. We outline six specific risk areas: I. Discrimination, Exclusion and Toxicity, II. Information Hazards, III. Misinformation Harms, V. Malicious Uses, V. Human-Computer Interaction Harms, VI. Automation, Access, and Environmental Harms. The first area concerns the perpetuation of stereotypes, unfair discrimination, exclusionary norms, toxic language, and lower performance by social group for LMs. The second focuses on risks from private data leaks or LMs correctly inferring sensitive information. The third addresses risks arising from poor, false or misleading information including in sensitive domains, and knock-on risks such as the erosion of trust in shared information. The fourth considers risks from actors who try to use LMs to cause harm. The fifth focuses on risks specific to LLMs used to underpin conversational agents that interact with human users, including unsafe use, manipulation or deception. The sixth discusses the risk of environmental harm, job automation, and other challenges that may have a disparate effect on different social groups or communities. In total, we review 21 risks in-depth. We discuss the points of origin of different risks and point to potential mitigation approaches. Lastly, we discuss organisational responsibilities in implementing mitigations, and the role of collaboration and participation. We highlight directions for further research, particularly on expanding the toolkit for assessing and evaluating the outlined risks in LMs.",True,True,Laura Weidinger and John Mellor and Maribeth Rauh and Conor Griffin and Jonathan Uesato and Po-Sen Huang and Myra Cheng and Mia Glaese and Borja Balle and Atoosa Kasirzadeh and Zac Kenton and Sasha Brown and Will Hawkins and Tom Stepleton and Courtney Biles and Abeba Birhane and Julia Haas and Laura Rimell and Lisa Anne Hendricks and William Isaac and Sean Legassick and Geoffrey Irving and Iason Gabriel,2021.0,,https://arxiv.org/abs/2112.04359,,ArXiv preprint,Ethical and social risks of harm from Language Models,Ethical and social risks of harm from Language Models,http://arxiv.org/pdf/2112.04359v1,"This paper aims to help structure the risk landscape associated with large-scale Language Models (LMs). In order to foster advances in responsible innovation, an in-depth understanding of the potential risks posed by these models is needed. A wide range of established and anticipated risks are analysed in detail, drawing on multidisciplinary expertise and literature from computer science, linguistics, and social sciences. We outline six specific risk areas: I. Discrimination, Exclusion and Toxicity, II. Information Hazards, III. Misinformation Harms, V. Malicious Uses, V. Human-Computer Interaction Harms, VI. Automation, Access, and Environmental Harms. 
The first area concerns the perpetuation of stereotypes, unfair discrimination, exclusionary norms, toxic language, and lower performance by social group for LMs. The second focuses on risks from private data leaks or LMs correctly inferring sensitive information. The third addresses risks arising from poor, false or misleading information including in sensitive domains, and knock-on risks such as the erosion of trust in shared information. The fourth considers risks from actors who try to use LMs to cause harm. The fifth focuses on risks specific to LLMs used to underpin conversational agents that interact with human users, including unsafe use, manipulation or deception. The sixth discusses the risk of environmental harm, job automation, and other challenges that may have a disparate effect on different social groups or communities. In total, we review 21 risks in-depth. We discuss the points of origin of different risks and point to potential mitigation approaches. Lastly, we discuss organisational responsibilities in implementing mitigations, and the role of collaboration and participation. We highlight directions for further research, particularly on expanding the toolkit for assessing and evaluating the outlined risks in LMs." COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,arditi2024refusallanguagemodelsmediated,\cite{arditi2024refusallanguagemodelsmediated},Refusal in Language Models Is Mediated by a Single Direction,http://arxiv.org/abs/2406.11717v3,"Conversational large language models are fine-tuned for both instruction-following and safety, resulting in models that obey benign requests but refuse harmful ones. While this refusal behavior is widespread across chat models, its underlying mechanisms remain poorly understood. In this work, we show that refusal is mediated by a one-dimensional subspace, across 13 popular open-source chat models up to 72B parameters in size. Specifically, for each model, we find a single direction such that erasing this direction from the model's residual stream activations prevents it from refusing harmful instructions, while adding this direction elicits refusal on even harmless instructions. Leveraging this insight, we propose a novel white-box jailbreak method that surgically disables refusal with minimal effect on other capabilities. Finally, we mechanistically analyze how adversarial suffixes suppress propagation of the refusal-mediating direction. Our findings underscore the brittleness of current safety fine-tuning methods. More broadly, our work showcases how an understanding of model internals can be leveraged to develop practical methods for controlling model behavior.",True,True,"Andy Arditi and Oscar Obeso and Aaquib Syed and Daniel Paleka and Nina Panickssery and Wes Gurnee and Neel Nanda",2024.0,,http://papers.nips.cc/paper\_files/paper/2024/hash/f545448535dfde4f9786555403ab7c49-Abstract-Conference.html,,,Refusal in Language Models Is Mediated by a Single Direction,Refusal in Language Models Is Mediated by a Single Direction,http://arxiv.org/pdf/2406.11717v3,"Conversational large language models are fine-tuned for both instruction-following and safety, resulting in models that obey benign requests but refuse harmful ones. While this refusal behavior is widespread across chat models, its underlying mechanisms remain poorly understood. In this work, we show that refusal is mediated by a one-dimensional subspace, across 13 popular open-source chat models up to 72B parameters in size. 
Specifically, for each model, we find a single direction such that erasing this direction from the model's residual stream activations prevents it from refusing harmful instructions, while adding this direction elicits refusal on even harmless instructions. Leveraging this insight, we propose a novel white-box jailbreak method that surgically disables refusal with minimal effect on other capabilities. Finally, we mechanistically analyze how adversarial suffixes suppress propagation of the refusal-mediating direction. Our findings underscore the brittleness of current safety fine-tuning methods. More broadly, our work showcases how an understanding of model internals can be leveraged to develop practical methods for controlling model behavior." COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,marshall2024refusalllmsaffinefunction,\cite{marshall2024refusalllmsaffinefunction},Refusal in LLMs is an Affine Function,http://arxiv.org/abs/2411.09003v3,"We propose affine concept editing (ACE) as an approach for steering language models' behavior by intervening directly in activations. We begin with an affine decomposition of model activation vectors and show that prior methods for steering model behavior correspond to subsets of terms of this decomposition. We then provide a derivation of ACE and use it to control refusal behavior on ten different models, including Llama 3 70B. ACE combines affine subspace projection and activation addition to reliably control the model's refusal responses across prompt types. We evaluate the results using LLM-based scoring on a collection of harmful and harmless prompts. Our experiments demonstrate that ACE consistently achieves more precise control over model behavior than existing methods and generalizes to models where directional ablation via affine subspace projection alone produces incoherent outputs. Code for reproducing our results is available at https://github.com/EleutherAI/steering-llama3 .",True,True,Thomas Marshall and Adam Scherlis and Nora Belrose,2024.0,,https://arxiv.org/abs/2411.09003,,ArXiv preprint,Refusal in LLMs is an Affine Function,Refusal in LLMs is an Affine Function,http://arxiv.org/pdf/2411.09003v3,"We propose affine concept editing (ACE) as an approach for steering language models' behavior by intervening directly in activations. We begin with an affine decomposition of model activation vectors and show that prior methods for steering model behavior correspond to subsets of terms of this decomposition. We then provide a derivation of ACE and use it to control refusal behavior on ten different models, including Llama 3 70B. ACE combines affine subspace projection and activation addition to reliably control the model's refusal responses across prompt types. We evaluate the results using LLM-based scoring on a collection of harmful and harmless prompts. Our experiments demonstrate that ACE consistently achieves more precise control over model behavior than existing methods and generalizes to models where directional ablation via affine subspace projection alone produces incoherent outputs. Code for reproducing our results is available at https://github.com/EleutherAI/steering-llama3 ." 
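Illustrative aside: a minimal PyTorch sketch of the difference-in-means direction and directional ablation described in the refusal-direction entry above (and generalized by the affine-editing entry). The activation tensors here are random stand-ins; function names and dimensions are assumptions.

import torch

def refusal_direction(harmful_acts, harmless_acts):
    # Difference-in-means between harmful- and harmless-prompt activations,
    # normalized to a unit direction.
    d = harmful_acts.mean(0) - harmless_acts.mean(0)
    return d / d.norm()

def ablate_direction(x, r_hat):
    # Erase the component along r_hat from each activation: x - (x . r) r
    return x - (x @ r_hat).unsqueeze(-1) * r_hat

harmful = torch.randn(128, 4096)     # stand-in residual-stream activations
harmless = torch.randn(128, 4096)
r_hat = refusal_direction(harmful, harmless)
resid = torch.randn(10, 4096)
# After ablation, activations carry no component along the refusal direction
assert torch.allclose(ablate_direction(resid, r_hat) @ r_hat,
                      torch.zeros(10), atol=1e-3)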
COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,zou2023representationengineeringtopdownapproach,\cite{zou2023representationengineeringtopdownapproach},Representation Engineering: A Top-Down Approach to AI Transparency,http://arxiv.org/abs/2310.01405v4,"In this paper, we identify and characterize the emerging area of representation engineering (RepE), an approach to enhancing the transparency of AI systems that draws on insights from cognitive neuroscience. RepE places population-level representations, rather than neurons or circuits, at the center of analysis, equipping us with novel methods for monitoring and manipulating high-level cognitive phenomena in deep neural networks (DNNs). We provide baselines and an initial analysis of RepE techniques, showing that they offer simple yet effective solutions for improving our understanding and control of large language models. We showcase how these methods can provide traction on a wide range of safety-relevant problems, including honesty, harmlessness, power-seeking, and more, demonstrating the promise of top-down transparency research. We hope that this work catalyzes further exploration of RepE and fosters advancements in the transparency and safety of AI systems.",True,True,Andy Zou and Long Phan and Sarah Chen and James Campbell and Phillip Guo and Richard Ren and Alexander Pan and Xuwang Yin and Mantas Mazeika and Ann-Kathrin Dombrowski and Shashwat Goel and Nathaniel Li and Michael J. Byun and Zifan Wang and Alex Mallen and Steven Basart and Sanmi Koyejo and Dawn Song and Matt Fredrikson and J. Zico Kolter and Dan Hendrycks,2023.0,,https://arxiv.org/abs/2310.01405,,ArXiv preprint,Representation Engineering: A Top-Down Approach to AI Transparency,Representation Engineering: A Top-Down Approach to AI ...,https://montrealethics.ai/representation-engineering-a-top-down-approach-to-ai-transparency/,"RepE is a top-down approach to transparency research that treats representations as the fundamental unit of analysis, aiming to understand and control" COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,Spectralediting,\cite{Spectralediting},Spectral Editing of Activations for Large Language Model Alignment,http://arxiv.org/abs/2405.09719v3,"Large language models (LLMs) often exhibit undesirable behaviours, such as generating untruthful or biased content. Editing their internal representations has been shown to be effective in mitigating such behaviours on top of the existing alignment methods. We propose a novel inference-time editing method, namely spectral editing of activations (SEA), to project the input representations into directions with maximal covariance with the positive demonstrations (e.g., truthful) while minimising covariance with the negative demonstrations (e.g., hallucinated). We also extend our method to non-linear editing using feature functions. We run extensive experiments on benchmarks concerning truthfulness and bias with six open-source LLMs of different sizes and model families. The results demonstrate the superiority of SEA in effectiveness, generalisation to similar tasks, as well as computation and data efficiency. We also show that SEA editing only has a limited negative impact on other model capabilities.",True,True,"Yifu Qiu and Zheng Zhao and Yftah Ziser and Anna Korhonen and Edoardo Maria Ponti and Shay B. 
Cohen",2024.0,,http://papers.nips.cc/paper\_files/paper/2024/hash/684c59d614fe6ae74a3be8c3ef07e061-Abstract-Conference.html,,,Spectral Editing of Activations for Large Language Model Alignment,Spectral Editing of Activations for Large Language Model Alignment,http://arxiv.org/pdf/2405.09719v3,"Large language models (LLMs) often exhibit undesirable behaviours, such as generating untruthful or biased content. Editing their internal representations has been shown to be effective in mitigating such behaviours on top of the existing alignment methods. We propose a novel inference-time editing method, namely spectral editing of activations (SEA), to project the input representations into directions with maximal covariance with the positive demonstrations (e.g., truthful) while minimising covariance with the negative demonstrations (e.g., hallucinated). We also extend our method to non-linear editing using feature functions. We run extensive experiments on benchmarks concerning truthfulness and bias with six open-source LLMs of different sizes and model families. The results demonstrate the superiority of SEA in effectiveness, generalisation to similar tasks, as well as computation and data efficiency. We also show that SEA editing only has a limited negative impact on other model capabilities." COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,bhattacharjee2024inferencetimecategorywisesafetysteering,\cite{bhattacharjee2024inferencetimecategorywisesafetysteering},"Towards Inference-time Category-wise Safety Steering for Large Language Models",http://arxiv.org/abs/2410.01174v1,"While large language models (LLMs) have seen unprecedented advancements in capabilities and applications across a variety of use-cases, safety alignment of these models is still an area of active research. The fragile nature of LLMs, even models that have undergone extensive alignment and safety training regimes, warrants additional safety steering steps via training-free, inference-time methods. While recent work in the area of mechanistic interpretability has investigated how activations in latent representation spaces may encode concepts, and thereafter performed representation engineering to induce such concepts in LLM outputs, the applicability of such for safety is relatively under-explored. Unlike recent inference-time safety steering works, in this paper we explore safety steering of LLM outputs using: (i) category-specific steering vectors, thereby enabling fine-grained control over the steering, and (ii) sophisticated methods for extracting informative steering vectors for more effective safety steering while retaining quality of the generated text. 
We demonstrate our exploration on multiple LLMs and datasets, and showcase the effectiveness of the proposed steering method, along with a discussion on the implications and best practices.",True,True,Amrita Bhattacharjee and Shaona Ghosh and Traian Rebedea and Christopher Parisien,2024.0,,https://arxiv.org/abs/2410.01174,,ArXiv preprint,"Towards Inference-time Category-wise Safety Steering for Large Language Models",Towards Inference-time Category-wise Safety Steering for Large...,https://openreview.net/forum?id=EkQRNLPFcn,We propose and explore an inference-time safety steering method for LLMs by intervening using category-specific steering vectors computed using model COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,uppaal2025profs,\cite{uppaal2025profs},"Model Editing as a Robust and Denoised variant of DPO: A Case Study on Toxicity",http://arxiv.org/abs/2405.13967v5,"Recent alignment algorithms such as direct preference optimization (DPO) have been developed to improve the safety of large language models (LLMs) by training these models to match human behaviors exemplified by preference data. However, these methods are both computationally intensive and lacking in controllability and transparency, inhibiting their widespread use. Furthermore, these tuning-based methods require large-scale preference data for training and are susceptible to noisy preference data. In this paper, we introduce a tuning-free alignment alternative, ProFS (Projection Filter for Subspaces), and demonstrate its effectiveness under the use case of toxicity reduction. Grounded on theory from factor analysis, ProFS is a sample-efficient model editing approach that identifies a toxic subspace in the model parameter space and reduces model toxicity by projecting away the detected subspace. The toxic subspace is identified by extracting preference data embeddings from the language model, and removing non-toxic information from these embeddings. We show that ProFS is more sample-efficient than DPO, further showcasing greater robustness to noisy data. Finally, we attempt to connect tuning based alignment with editing, by establishing both theoretical and empirical connections between ProFS and DPO, showing that ProFS can be interpreted as a denoised version of a single DPO step.",True,True,"Uppaal, Rheeya and Dey, Apratim and He, Yiting and Zhong, Yiqiao and Hu, Junjie",2025.0,,,,,"Model Editing as a Robust and Denoised variant of DPO: A Case Study on Toxicity",Rheeya Uppaal - Google Scholar,https://scholar.google.com/citations?user=nx3vmEkAAAAJ&hl=en,"DeTox: Toxic Subspace Projection for Model Editing. R Uppaal, A De ... 2019. Model editing as a robust and denoised variant of dpo: A case study on toxicity." COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,burns2024discoveringlatentknowledgelanguage,\cite{burns2024discoveringlatentknowledgelanguage},Discovering Latent Knowledge in Language Models Without Supervision,http://arxiv.org/abs/2212.03827v2,"Existing techniques for training language models can be misaligned with the truth: if we train models with imitation learning, they may reproduce errors that humans make; if we train them to generate text that humans rate highly, they may output errors that human evaluators can't detect. We propose circumventing this issue by directly finding latent knowledge inside the internal activations of a language model in a purely unsupervised way. 
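Illustrative aside: a minimal sketch of the subspace-projection idea described in the ProFS entry above: estimate a toxic subspace from preference-data embeddings (centering removes shared, non-toxic information), then project that subspace away. The rank, dimensions, and random data are placeholder assumptions.

import torch

def subspace_projector(toxic_embs, rank=2):
    # Center to strip shared information, then take the top right-singular
    # vectors as an estimate of the toxic subspace.
    centered = toxic_embs - toxic_embs.mean(0, keepdim=True)
    _, _, Vh = torch.linalg.svd(centered, full_matrices=False)
    B = Vh[:rank]                              # (rank, d) orthonormal basis
    return torch.eye(B.shape[1]) - B.T @ B     # projector onto the complement

P = subspace_projector(torch.randn(256, 512))
edited = torch.randn(512) @ P   # embedding with the estimated toxic directions removed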
Specifically, we introduce a method for accurately answering yes-no questions given only unlabeled model activations. It works by finding a direction in activation space that satisfies logical consistency properties, such as that a statement and its negation have opposite truth values. We show that despite using no supervision and no model outputs, our method can recover diverse knowledge represented in large language models: across 6 models and 10 question-answering datasets, it outperforms zero-shot accuracy by 4\% on average. We also find that it cuts prompt sensitivity in half and continues to maintain high accuracy even when models are prompted to generate incorrect answers. Our results provide an initial step toward discovering what language models know, distinct from what they say, even when we don't have access to explicit ground truth labels.",True,True,"Collin Burns and Haotian Ye and Dan Klein and Jacob Steinhardt",2023.0,,https://openreview.net/pdf?id=ETKGuby0hcs,,,Discovering Latent Knowledge in Language Models Without Supervision,Discovering Latent Knowledge in Language Models Without Supervision,http://arxiv.org/pdf/2212.03827v2,"Existing techniques for training language models can be misaligned with the truth: if we train models with imitation learning, they may reproduce errors that humans make; if we train them to generate text that humans rate highly, they may output errors that human evaluators can't detect. We propose circumventing this issue by directly finding latent knowledge inside the internal activations of a language model in a purely unsupervised way. Specifically, we introduce a method for accurately answering yes-no questions given only unlabeled model activations. It works by finding a direction in activation space that satisfies logical consistency properties, such as that a statement and its negation have opposite truth values. We show that despite using no supervision and no model outputs, our method can recover diverse knowledge represented in large language models: across 6 models and 10 question-answering datasets, it outperforms zero-shot accuracy by 4\% on average. We also find that it cuts prompt sensitivity in half and continues to maintain high accuracy even when models are prompted to generate incorrect answers. Our results provide an initial step toward discovering what language models know, distinct from what they say, even when we don't have access to explicit ground truth labels." COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,panickssery2024steeringllama2contrastive,\cite{panickssery2024steeringllama2contrastive},Steering Llama 2 via Contrastive Activation Addition,http://arxiv.org/abs/2312.06681v4,"We introduce Contrastive Activation Addition (CAA), an innovative method for steering language models by modifying their activations during forward passes. CAA computes ""steering vectors"" by averaging the difference in residual stream activations between pairs of positive and negative examples of a particular behavior, such as factual versus hallucinatory responses. During inference, these steering vectors are added at all token positions after the user's prompt with either a positive or negative coefficient, allowing precise control over the degree of the targeted behavior. We evaluate CAA's effectiveness on Llama 2 Chat using multiple-choice behavioral question datasets and open-ended generation tasks. 
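Editor's sketch: the Burns et al. record above finds a truth direction without supervision by requiring that a statement and its negation receive opposite probabilities. The torch sketch below trains such a probe with the consistency-plus-confidence objective the abstract describes; the random stand-in activations and probe size are assumptions.

import torch

torch.manual_seed(0)
n, d = 128, 32
phi_pos, phi_neg = torch.randn(n, d), torch.randn(n, d)  # statement / negation activations

probe = torch.nn.Linear(d, 1)
opt = torch.optim.Adam(probe.parameters(), lr=1e-2)

for _ in range(200):
    p_pos = torch.sigmoid(probe(phi_pos)).squeeze(-1)
    p_neg = torch.sigmoid(probe(phi_neg)).squeeze(-1)
    consistency = ((p_pos - (1 - p_neg)) ** 2).mean()       # negation flips the label
    confidence = (torch.minimum(p_pos, p_neg) ** 2).mean()  # discourage the degenerate p = 0.5 probe
    loss = consistency + confidence
    opt.zero_grad()
    loss.backward()
    opt.step()

# The learned weight vector is the candidate "truth direction" in activation space.
truth_direction = probe.weight.detach().squeeze(0)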
We demonstrate that CAA significantly alters model behavior, is effective over and on top of traditional methods like finetuning and system prompt design, and minimally reduces capabilities. Moreover, we gain deeper insights into CAA's mechanisms by employing various activation space interpretation methods. CAA accurately steers model outputs and sheds light on how high-level concepts are represented in Large Language Models (LLMs).",True,True,Nina Panickssery and Nick Gabrieli and Julian Schulz and Meg Tong and Evan Hubinger and Alexander Matt Turner,2023.0,,https://arxiv.org/abs/2312.06681,,ArXiv preprint,Steering Llama 2 via Contrastive Activation Addition,Steering Llama 2 via Contrastive Activation Addition,http://arxiv.org/pdf/2312.06681v4,"We introduce Contrastive Activation Addition (CAA), an innovative method for steering language models by modifying their activations during forward passes. CAA computes ""steering vectors"" by averaging the difference in residual stream activations between pairs of positive and negative examples of a particular behavior, such as factual versus hallucinatory responses. During inference, these steering vectors are added at all token positions after the user's prompt with either a positive or negative coefficient, allowing precise control over the degree of the targeted behavior. We evaluate CAA's effectiveness on Llama 2 Chat using multiple-choice behavioral question datasets and open-ended generation tasks. We demonstrate that CAA significantly alters model behavior, is effective over and on top of traditional methods like finetuning and system prompt design, and minimally reduces capabilities. Moreover, we gain deeper insights into CAA's mechanisms by employing various activation space interpretation methods. CAA accurately steers model outputs and sheds light on how high-level concepts are represented in Large Language Models (LLMs)." COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,turner2024steeringlanguagemodelsactivation,\cite{turner2024steeringlanguagemodelsactivation},Steering Language Models With Activation Engineering,http://arxiv.org/abs/2308.10248v5,"Prompt engineering and finetuning aim to maximize language model performance on a given metric (like toxicity reduction). However, these methods do not fully elicit a model's capabilities. To reduce this gap, we introduce activation engineering: the inference-time modification of activations in order to control (or steer) model outputs. Specifically, we introduce the Activation Addition (ActAdd) technique, which contrasts the intermediate activations on prompt pairs (such as ""Love"" versus ""Hate"") to compute a steering vector (Subramani et al. 2022). By tactically adding in e.g. the ""Love"" - ""Hate"" steering vector during the forward pass, we achieve SOTA on negative-to-positive sentiment shift and detoxification using models including LLaMA-3 and OPT. ActAdd yields inference-time control over high-level output properties (like topic and sentiment) while preserving performance on off-target tasks. ActAdd is lightweight: it does not require any machine optimization and works with a single pair of data points, which enables rapid iteration over steering. ActAdd demonstrates the power of activation engineering.",True,True,Alexander Matt Turner and Lisa Thiergart and Gavin Leech and David Udell and Juan J. 
Vazquez and Ulisse Mini and Monte MacDiarmid,2023.0,,https://arxiv.org/abs/2308.10248,,ArXiv preprint,Steering Language Models With Activation Engineering,Steering Language Models With Activation Engineering,http://arxiv.org/pdf/2308.10248v5,"Prompt engineering and finetuning aim to maximize language model performance on a given metric (like toxicity reduction). However, these methods do not fully elicit a model's capabilities. To reduce this gap, we introduce activation engineering: the inference-time modification of activations in order to control (or steer) model outputs. Specifically, we introduce the Activation Addition (ActAdd) technique, which contrasts the intermediate activations on prompt pairs (such as ""Love"" versus ""Hate"") to compute a steering vector (Subramani et al. 2022). By tactically adding in e.g. the ""Love"" - ""Hate"" steering vector during the forward pass, we achieve SOTA on negative-to-positive sentiment shift and detoxification using models including LLaMA-3 and OPT. ActAdd yields inference-time control over high-level output properties (like topic and sentiment) while preserving performance on off-target tasks. ActAdd is lightweight: it does not require any machine optimization and works with a single pair of data points, which enables rapid iteration over steering. ActAdd demonstrates the power of activation engineering." COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,lee2025programmingrefusalconditionalactivation,\cite{lee2025programmingrefusalconditionalactivation},Programming Refusal with Conditional Activation Steering,http://arxiv.org/abs/2409.05907v3,"LLMs have shown remarkable capabilities, but precisely controlling their response behavior remains challenging. Existing activation steering methods alter LLM behavior indiscriminately, limiting their practical applicability in settings where selective responses are essential, such as content moderation or domain-specific assistants. In this paper, we propose Conditional Activation Steering (CAST), which analyzes LLM activation patterns during inference to selectively apply or withhold activation steering based on the input context. Our method is based on the observation that different categories of prompts activate distinct patterns in the model's hidden states. Using CAST, one can systematically control LLM behavior with rules like ""if input is about hate speech or adult content, then refuse"" or ""if input is not about legal advice, then refuse."" This allows for selective modification of responses to specific content while maintaining normal responses to other content, all without requiring weight optimization. We release an open-source implementation of our framework at github.com/IBM/activation-steering .",True,True,Bruce W. Lee and Inkit Padhi and Karthikeyan Natesan Ramamurthy and Erik Miehling and Pierre Dognin and Manish Nagireddy and Amit Dhurandhar,2024.0,,https://arxiv.org/abs/2409.05907,,ArXiv preprint,Programming Refusal with Conditional Activation Steering,Programming Refusal with Conditional Activation Steering,http://arxiv.org/pdf/2409.05907v3,"LLMs have shown remarkable capabilities, but precisely controlling their response behavior remains challenging. Existing activation steering methods alter LLM behavior indiscriminately, limiting their practical applicability in settings where selective responses are essential, such as content moderation or domain-specific assistants. 
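Editor's sketch: the CAA and ActAdd records above both compute a steering vector by contrasting activations on positive and negative examples and adding it back during the forward pass. A minimal difference-in-means sketch follows, with random stand-ins for residual-stream activations.

import numpy as np

rng = np.random.default_rng(1)
d = 64
acts_pos = rng.normal(size=(100, d))   # residual activations on positive-behaviour examples
acts_neg = rng.normal(size=(100, d))   # ... and on negative-behaviour examples

steering = acts_pos.mean(axis=0) - acts_neg.mean(axis=0)   # mean activation difference

def steer(hidden, coeff=1.0):
    # Added at token positions after the prompt; sign and size of coeff set the behaviour.
    return hidden + coeff * steering

h = rng.normal(size=d)
print(np.allclose(steer(h, 0.0), h))   # coeff 0 leaves the stream unchanged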
In this paper, we propose Conditional Activation Steering (CAST), which analyzes LLM activation patterns during inference to selectively apply or withhold activation steering based on the input context. Our method is based on the observation that different categories of prompts activate distinct patterns in the model's hidden states. Using CAST, one can systematically control LLM behavior with rules like ""if input is about hate speech or adult content, then refuse"" or ""if input is not about legal advice, then refuse."" This allows for selective modification of responses to specific content while maintaining normal responses to other content, all without requiring weight optimization. We release an open-source implementation of our framework at github.com/IBM/activation-steering ." COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,guerner2024geometricnotioncausalprobing,\cite{guerner2024geometricnotioncausalprobing},A Geometric Notion of Causal Probing,http://arxiv.org/abs/2307.15054v4,"The linear subspace hypothesis (Bolukbasi et al., 2016) states that, in a language model's representation space, all information about a concept such as verbal number is encoded in a linear subspace. Prior work has relied on auxiliary classification tasks to identify and evaluate candidate subspaces that might give support for this hypothesis. We instead give a set of intrinsic criteria which characterize an ideal linear concept subspace and enable us to identify the subspace using only the language model distribution. Our information-theoretic framework accounts for spuriously correlated features in the representation space (Kumar et al., 2022) by reconciling the statistical notion of concept information and the geometric notion of how concepts are encoded in the representation space. As a byproduct of this analysis, we hypothesize a causal process for how a language model might leverage concepts during generation. Empirically, we find that linear concept erasure is successful in erasing most concept information under our framework for verbal number as well as some complex aspect-level sentiment concepts from a restaurant review dataset. Our causal intervention for controlled generation shows that, for at least one concept across two languages models, the concept subspace can be used to manipulate the concept value of the generated word with precision.",True,True,Clément Guerner and Anej Svete and Tianyu Liu and Alexander Warstadt and Ryan Cotterell,2023.0,,https://arxiv.org/abs/2307.15054,,ArXiv preprint,A Geometric Notion of Causal Probing,A Geometric Notion of Causal Probing,http://arxiv.org/pdf/2307.15054v4,"The linear subspace hypothesis (Bolukbasi et al., 2016) states that, in a language model's representation space, all information about a concept such as verbal number is encoded in a linear subspace. Prior work has relied on auxiliary classification tasks to identify and evaluate candidate subspaces that might give support for this hypothesis. We instead give a set of intrinsic criteria which characterize an ideal linear concept subspace and enable us to identify the subspace using only the language model distribution. Our information-theoretic framework accounts for spuriously correlated features in the representation space (Kumar et al., 2022) by reconciling the statistical notion of concept information and the geometric notion of how concepts are encoded in the representation space. 
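Editor's sketch: the CAST record above gates steering on the input, checking a condition direction first and adding the behaviour vector only when the projection crosses a threshold. A schematic sketch of that rule structure; the threshold and both vectors are assumed placeholders.

import numpy as np

rng = np.random.default_rng(2)
d = 64
condition_dir = rng.normal(size=d)
condition_dir /= np.linalg.norm(condition_dir)   # direction that fires on, e.g., hate-speech prompts
refusal_vec = rng.normal(size=d)                 # behaviour vector added when the condition holds
THRESHOLD = 0.5                                  # assumed; tuned on held-out prompts in practice

def conditional_steer(hidden):
    # "if the input matches the condition, then steer" rule structure.
    if float(hidden @ condition_dir) > THRESHOLD:
        return hidden + refusal_vec
    return hidden

print(conditional_steer(rng.normal(size=d)).shape)  # (64,)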
As a byproduct of this analysis, we hypothesize a causal process for how a language model might leverage concepts during generation. Empirically, we find that linear concept erasure is successful in erasing most concept information under our framework for verbal number as well as some complex aspect-level sentiment concepts from a restaurant review dataset. Our causal intervention for controlled generation shows that, for at least one concept across two languages models, the concept subspace can be used to manipulate the concept value of the generated word with precision." COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,haghighatkhah2022betterhitnailhead,\cite{haghighatkhah2022betterhitnailhead},"Better Hit the Nail on the Head than Beat around the Bush: Removing Protected Attributes with a Single Projection",http://arxiv.org/abs/2212.04273v1,"Bias elimination and recent probing studies attempt to remove specific information from embedding spaces. Here it is important to remove as much of the target information as possible, while preserving any other information present. INLP is a popular recent method which removes specific information through iterative nullspace projections. Multiple iterations, however, increase the risk that information other than the target is negatively affected. We introduce two methods that find a single targeted projection: Mean Projection (MP, more efficient) and Tukey Median Projection (TMP, with theoretical guarantees). Our comparison between MP and INLP shows that (1) one MP projection removes linear separability based on the target and (2) MP has less impact on the overall space. Further analysis shows that applying random projections after MP leads to the same overall effects on the embedding space as the multiple projections of INLP. Applying one targeted (MP) projection hence is methodologically cleaner than applying multiple (INLP) projections that introduce random effects.",True,True,"Haghighatkhah, Pantea and Fokkens, Antske and Sommerauer, Pia and Speckmann, Bettina and Verbeek, Kevin",2022.0,,https://aclanthology.org/2022.emnlp-main.575,10.18653/v1/2022.emnlp-main.575,,"Better Hit the Nail on the Head than Beat around the Bush: Removing Protected Attributes with a Single Projection",Better Hit the Nail on the Head than Beat around the Bush,https://www.researchgate.net/publication/366135893_Better_Hit_the_Nail_on_the_Head_than_Beat_around_the_Bush_Removing_Protected_Attributes_with_a_Single_Projection,Our comparison between MP and INLP shows that (1) one MP projection removes linear separability based on the target and (2) MP has less impact COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,ravfogel2020nulloutguardingprotected,\cite{ravfogel2020nulloutguardingprotected},"Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection",http://arxiv.org/abs/2004.07667v2,"The ability to control for the kinds of information encoded in neural representation has a variety of use cases, especially in light of the challenge of interpreting these models. We present Iterative Null-space Projection (INLP), a novel method for removing information from neural representations. Our method is based on repeated training of linear classifiers that predict a certain property we aim to remove, followed by projection of the representations on their null-space. 
By doing so, the classifiers become oblivious to that target property, making it hard to linearly separate the data according to it. While applicable for multiple uses, we evaluate our method on bias and fairness use-cases, and show that our method is able to mitigate bias in word embeddings, as well as to increase fairness in a setting of multi-class classification.",True,True,"Ravfogel, Shauli and Elazar, Yanai and Gonen, Hila and Twiton, Michael and Goldberg, Yoav",2020.0,,https://aclanthology.org/2020.acl-main.647,10.18653/v1/2020.acl-main.647,,"Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection",Shauli Ravfogel - Google Scholar,https://scholar.google.co.il/citations?user=x09r-T8AAAAJ&hl=en,"Null it out: Guarding protected attributes by iterative nullspace projection. S Ravfogel, Y Elazar, H Gonen, M Twiton, Y Goldberg. Proceedings of the 58th" COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,belrose2023leaceperfectlinearconcept,\cite{belrose2023leaceperfectlinearconcept},LEACE: Perfect linear concept erasure in closed form,http://arxiv.org/abs/2306.03819v4,"Concept erasure aims to remove specified features from an embedding. It can improve fairness (e.g. preventing a classifier from using gender or race) and interpretability (e.g. removing a concept to observe changes in model behavior). We introduce LEAst-squares Concept Erasure (LEACE), a closed-form method which provably prevents all linear classifiers from detecting a concept while changing the embedding as little as possible, as measured by a broad class of norms. We apply LEACE to large language models with a novel procedure called ""concept scrubbing,"" which erases target concept information from every layer in the network. We demonstrate our method on two tasks: measuring the reliance of language models on part-of-speech information, and reducing gender bias in BERT embeddings. Code is available at https://github.com/EleutherAI/concept-erasure.",True,True,"Nora Belrose and David Schneider{-}Joseph and Shauli Ravfogel and Ryan Cotterell and Edward Raff and Stella Biderman",2023.0,,http://papers.nips.cc/paper\_files/paper/2023/hash/d066d21c619d0a78c5b557fa3291a8f4-Abstract-Conference.html,,,LEACE: Perfect linear concept erasure in closed form,LEACE: Perfect linear concept erasure in closed form,http://arxiv.org/pdf/2306.03819v4,"Concept erasure aims to remove specified features from an embedding. It can improve fairness (e.g. preventing a classifier from using gender or race) and interpretability (e.g. removing a concept to observe changes in model behavior). We introduce LEAst-squares Concept Erasure (LEACE), a closed-form method which provably prevents all linear classifiers from detecting a concept while changing the embedding as little as possible, as measured by a broad class of norms. We apply LEACE to large language models with a novel procedure called ""concept scrubbing,"" which erases target concept information from every layer in the network. We demonstrate our method on two tasks: measuring the reliance of language models on part-of-speech information, and reducing gender bias in BERT embeddings. Code is available at https://github.com/EleutherAI/concept-erasure." 
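Editor's sketch: the INLP record above removes a protected attribute by repeatedly training a linear probe and projecting representations onto its nullspace (LEACE, in the preceding record, derives a one-shot closed-form alternative). A small scikit-learn sketch of the iterative loop, on synthetic data where the attribute leaks through one coordinate; dimensions and the stopping threshold are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 32))
y = (X[:, 0] > 0).astype(int)   # toy protected attribute, linearly readable from dim 0

P = np.eye(32)                  # running nullspace projection
for _ in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X @ P, y)
    w = clf.coef_ / np.linalg.norm(clf.coef_)
    P = P @ (np.eye(32) - w.T @ w)      # remove the probe's direction
    if clf.score(X @ P, y) < 0.55:      # last probe is near chance once its direction is projected out
        break

X_clean = X @ P   # representations with the attribute (linearly) removed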
COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,wang2024trojanactivationattackredteaming,\cite{wang2024trojanactivationattackredteaming},"Trojan Activation Attack: Red-Teaming Large Language Models using Activation Steering for Safety-Alignment",http://arxiv.org/abs/2311.09433v3,"To ensure AI safety, instruction-tuned Large Language Models (LLMs) are specifically trained to ensure alignment, which refers to making models behave in accordance with human intentions. While these models have demonstrated commendable results on various safety benchmarks, the vulnerability of their safety alignment has not been extensively studied. This is particularly troubling given the potential harm that LLMs can inflict. Existing attack methods on LLMs often rely on poisoned training data or the injection of malicious prompts. These approaches compromise the stealthiness and generalizability of the attacks, making them susceptible to detection. Additionally, these models often demand substantial computational resources for implementation, making them less practical for real-world applications. In this work, we study a different attack scenario, called Trojan Activation Attack (TA^2), which injects trojan steering vectors into the activation layers of LLMs. These malicious steering vectors can be triggered at inference time to steer the models toward attacker-desired behaviors by manipulating their activations. Our experiment results on four primary alignment tasks show that TA^2 is highly effective and adds little or no overhead to attack efficiency. Additionally, we discuss potential countermeasures against such activation attacks.",True,True,Haoran Wang and Kai Shu,2023.0,,https://arxiv.org/abs/2311.09433,,ArXiv preprint,"Trojan Activation Attack: Red-Teaming Large Language Models using Activation Steering for Safety-Alignment",Trojan Activation Attack: Red-Teaming Large Language Models ...,https://arxiv.org/html/2311.09433v3,"Trojan Activation Attack: Red-Teaming Large Language Models using Activation Steering for Safety-Alignment Trojan Activation Attack: Red-Teaming Large Language Models using Activation Steering for Safety-Alignment Large Language Models (LLMs) are generally trained on massive text corpora scraped from the web (Touvron et al., 2023a; Chowdhery et al., 2022), which are known to contain a substantial amount of objectionable content. Building upon the advancements in activation engineering (Turner et al., 2023) and its application in red-teaming LLMs (Rimsky, 2023a), we perform activation attacks on four primary target alignments under a diverse range of attack settings. By using activation addition (Turner et al., 2023), activation attacks break the alignments of LLMs by injecting trojan steering vectors that target specific aspects such as truthfulness or toxicity." COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,bolukbasi2016man,\cite{bolukbasi2016man},"Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings",http://arxiv.org/abs/1607.06520v1,"The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. 
This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. We define metrics to quantify both direct and indirect gender biases in embeddings, and develop algorithms to ""debias"" the embedding. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.",True,True,"Tolga Bolukbasi and Kai{-}Wei Chang and James Y. Zou and Venkatesh Saligrama and Adam Tauman Kalai",2016.0,,https://proceedings.neurips.cc/paper/2016/hash/a486cd07e4ac3d270571622f4f316ec5-Abstract.html,,,"Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings",Tolga Bolukbasi - Google Scholar,https://scholar.google.com/citations?user=3rF9gtAAAAAJ&hl=en,"Man is to Computer Programmer as Woman is to Homemaker. T Bolukbasi, KW Chang, J Zou, V Saligrama, A Kalai. Debiasing word embeddings 29, 2016. 240, 2016." COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,elhage2022toymodelssuperposition,\cite{elhage2022toymodelssuperposition},Toy Models of Superposition,http://arxiv.org/abs/2209.10652v1,"Neural networks often pack many unrelated concepts into a single neuron - a puzzling phenomenon known as 'polysemanticity' which makes interpretability much more challenging. This paper provides a toy model where polysemanticity can be fully understood, arising as a result of models storing additional sparse features in ""superposition."" We demonstrate the existence of a phase change, a surprising connection to the geometry of uniform polytopes, and evidence of a link to adversarial examples. We also discuss potential implications for mechanistic interpretability.",True,True,Nelson Elhage and Tristan Hume and Catherine Olsson and Nicholas Schiefer and Tom Henighan and Shauna Kravec and Zac Hatfield-Dodds and Robert Lasenby and Dawn Drain and Carol Chen and Roger Grosse and Sam McCandlish and Jared Kaplan and Dario Amodei and Martin Wattenberg and Christopher Olah,2022.0,,https://arxiv.org/abs/2209.10652,,ArXiv preprint,Toy Models of Superposition,Toy Models of Superposition,http://arxiv.org/pdf/2209.10652v1,"Neural networks often pack many unrelated concepts into a single neuron - a puzzling phenomenon known as 'polysemanticity' which makes interpretability much more challenging. This paper provides a toy model where polysemanticity can be fully understood, arising as a result of models storing additional sparse features in ""superposition."" We demonstrate the existence of a phase change, a surprising connection to the geometry of uniform polytopes, and evidence of a link to adversarial examples. We also discuss potential implications for mechanistic interpretability."
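Editor's sketch: the Bolukbasi et al. record above identifies a gender direction in the embedding space and neutralizes gender-neutral words against it. A one-pair simplification follows (the paper aggregates several definitional pairs with PCA); the random stand-in vectors are assumptions.

import numpy as np

rng = np.random.default_rng(4)
emb = {w: rng.normal(size=50) for w in ["he", "she", "receptionist", "queen"]}

g = emb["he"] - emb["she"]        # gender direction from one definitional pair
g /= np.linalg.norm(g)

def neutralize(v):
    # Remove the gender component from a gender-neutral word's vector.
    return v - (v @ g) * g

emb["receptionist"] = neutralize(emb["receptionist"])
print(abs(emb["receptionist"] @ g) < 1e-9)   # True: no residual gender component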
COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,park2024linearrepresentationhypothesisgeometry,\cite{park2024linearrepresentationhypothesisgeometry},"The Linear Representation Hypothesis and the Geometry of Large Language Models",http://arxiv.org/abs/2311.03658v2,"Informally, the 'linear representation hypothesis' is the idea that high-level concepts are represented linearly as directions in some representation space. In this paper, we address two closely related questions: What does ""linear representation"" actually mean? And, how do we make sense of geometric notions (e.g., cosine similarity or projection) in the representation space? To answer these, we use the language of counterfactuals to give two formalizations of ""linear representation"", one in the output (word) representation space, and one in the input (sentence) space. We then prove these connect to linear probing and model steering, respectively. To make sense of geometric notions, we use the formalization to identify a particular (non-Euclidean) inner product that respects language structure in a sense we make precise. Using this causal inner product, we show how to unify all notions of linear representation. In particular, this allows the construction of probes and steering vectors using counterfactual pairs. Experiments with LLaMA-2 demonstrate the existence of linear representations of concepts, the connection to interpretation and control, and the fundamental role of the choice of inner product.",True,True,"Kiho Park and Yo Joong Choe and Victor Veitch",2024.0,,https://openreview.net/forum?id=UGpGkLzwpP,,,"The Linear Representation Hypothesis and the Geometry of Large Language Models",NeurIPS The Linear Representation Hypothesis in Language Models,https://neurips.cc/virtual/2023/77537,"In the context of large language models, the ""linear representation hypothesis"" is the idea that high-level concepts are represented linearly as directions" COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,mikolov2013linguistic,\cite{mikolov2013linguistic},Linguistic Regularities in Continuous Space Word Representations,,,True,False,"Mikolov, Tomas and Yih, Wen-tau and Zweig, Geoffrey",2013.0,,https://aclanthology.org/N13-1090,,,Linguistic Regularities in Continuous Space Word Representations,arXiv:1806.07978v1 [cs.LG] 20 Jun 2018,https://arxiv.org/pdf/1806.07978,"by T Eichinger · 2018 · Cited by 1 — Mikolov, W. Yih, and G. Zweig, “Linguistic regularities in continuous space word representations.” in HLT-NAACL, 2013, pp. 746–" COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,nanda2023emergentlinearrepresentationsworld,\cite{nanda2023emergentlinearrepresentationsworld},"Emergent Linear Representations in World Models of Self-Supervised Sequence Models",http://arxiv.org/abs/2309.00941v2,"How do sequence models represent their decision-making process? Prior work suggests that Othello-playing neural network learned nonlinear models of the board state (Li et al., 2023). In this work, we provide evidence of a closely related linear representation of the board. In particular, we show that probing for ""my colour"" vs. ""opponent's colour"" may be a simple yet powerful way to interpret the model's internal state. This precise understanding of the internal representations allows us to control the model's behaviour with simple vector arithmetic. 
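Editor's sketch: the Mikolov et al. record above concerns linguistic regularities recoverable by vector arithmetic, the classic vec(king) - vec(man) + vec(woman) ≈ vec(queen) pattern. A nearest-neighbour sketch; with the random stand-in vectors used here the output is arbitrary, but with trained embeddings it recovers the analogy.

import numpy as np

rng = np.random.default_rng(5)
E = {w: rng.normal(size=50) for w in ["king", "queen", "man", "woman", "apple"]}

def analogy(a, b, c, emb):
    # Closest vocabulary word to vec(b) - vec(a) + vec(c), excluding the inputs.
    target = emb[b] - emb[a] + emb[c]
    cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return max((w for w in emb if w not in (a, b, c)), key=lambda w: cos(emb[w], target))

print(analogy("man", "king", "woman", E))   # "queen" with trained vectors; arbitrary here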
Linear representations enable significant interpretability progress, which we demonstrate with further exploration of how the world model is computed.",True,True,"Nanda, Neel and Lee, Andrew and Wattenberg, Martin",2023.0,,https://aclanthology.org/2023.blackboxnlp-1.2,10.18653/v1/2023.blackboxnlp-1.2,,"Emergent Linear Representations in World Models of Self-Supervised Sequence Models",Emergent Linear Representations in World Models of Self- ...,https://huggingface.co/papers/2309.00941,"Sequence models use linear representations to interpret their decision-making processes in games like Othello, allowing for control of model" COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,hernandez2021lowdimensionallineargeometrycontextualized,\cite{hernandez2021lowdimensionallineargeometrycontextualized},"The Low-Dimensional Linear Geometry of Contextualized Word Representations",http://arxiv.org/abs/2105.07109v2,"Black-box probing models can reliably extract linguistic features like tense, number, and syntactic role from pretrained word representations. However, the manner in which these features are encoded in representations remains poorly understood. We present a systematic study of the linear geometry of contextualized word representations in ELMO and BERT. We show that a variety of linguistic features (including structured dependency relationships) are encoded in low-dimensional subspaces. We then refine this geometric picture, showing that there are hierarchical relations between the subspaces encoding general linguistic categories and more specific ones, and that low-dimensional feature encodings are distributed rather than aligned to individual neurons. Finally, we demonstrate that these linear subspaces are causally related to model behavior, and can be used to perform fine-grained manipulation of BERT's output distribution.",True,True,"Hernandez, Evan and Andreas, Jacob",2021.0,,https://aclanthology.org/2021.conll-1.7,10.18653/v1/2021.conll-1.7,,"The Low-Dimensional Linear Geometry of Contextualized Word Representations",Evan Hernandez - Google Scholar,https://scholar.google.com/citations?user=38EC20cAAAAJ&hl=en,"The low-dimensional linear geometry of contextualized word representations. E Hernandez, J Andreas. arXiv preprint arXiv:2105.07109, 2021. 50, 2021. 
A" COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,bricken2023monosemanticity,\cite{bricken2023monosemanticity},Towards Monosemanticity: Decomposing Language Models With Dictionary Learning,,,True,False,"Bricken, Trenton and Templeton, Adly and Batson, Joshua and Chen, Brian and Jermyn, Adam and Conerly, Tom and Turner, Nick and Anil, Cem and Denison, Carson and Askell, Amanda and Lasenby, Robert and Wu, Yifan and Kravec, Shauna and Schiefer, Nicholas and Maxwell, Tim and Joseph, Nicholas and Hatfield-Dodds, Zac and Tamkin, Alex and Nguyen, Karina and McLean, Brayden and Burke, Josiah E and Hume, Tristan and Carter, Shan and Henighan, Tom and Olah, Christopher",2023.0,,,,Transformer Circuits Thread,Towards Monosemanticity: Decomposing Language Models With Dictionary Learning,Decomposing Language Models With Dictionary Learning,https://www.anthropic.com/research/towards-monosemanticity-decomposing-language-models-with-dictionary-learning,"In our latest paper, Towards Monosemanticity: Decomposing Language Models With Dictionary Learning, we outline evidence that there are better units of analysis" COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,templeton2024scaling,\cite{templeton2024scaling},Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet,,,True,False,"Templeton, Adly and Conerly, Tom and Marcus, Jonathan and Lindsey, Jack and Bricken, Trenton and Chen, Brian and Pearce, Adam and Citro, Craig and Ameisen, Emmanuel and Jones, Andy and Cunningham, Hoagy and Turner, Nicholas L and McDougall, Callum and MacDiarmid, Monte and Freeman, C. Daniel and Sumers, Theodore R. and Rees, Edward and Batson, Joshua and Jermyn, Adam and Carter, Shan and Olah, Chris and Henighan, Tom",2024.0,,https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html,,Transformer Circuits Thread,Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet,arXiv:2406.17969v2 [cs.CL] 15 Oct 2024,https://arxiv.org/pdf/2406.17969,"by H Yan · 2024 · Cited by 8 — Scaling monosemanticity: Extracting interpretable · features from claude 3 sonnet. Transformer Circuits. Thread. Hugo Touvron, Thibaut Lavril" COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,cunningham2023sparseautoencodershighlyinterpretable,\cite{cunningham2023sparseautoencodershighlyinterpretable},"Sparse Autoencoders Find Highly Interpretable Features in Language Models",http://arxiv.org/abs/2309.08600v3,"One of the roadblocks to a better understanding of neural networks' internals is \textit{polysemanticity}, where neurons appear to activate in multiple, semantically distinct contexts. Polysemanticity prevents us from identifying concise, human-understandable explanations for what neural networks are doing internally. One hypothesised cause of polysemanticity is \textit{superposition}, where neural networks represent more features than they have neurons by assigning features to an overcomplete set of directions in activation space, rather than to individual neurons. Here, we attempt to identify those directions, using sparse autoencoders to reconstruct the internal activations of a language model. These autoencoders learn sets of sparsely activating features that are more interpretable and monosemantic than directions identified by alternative approaches, where interpretability is measured by automated methods. 
Moreover, we show that with our learned set of features, we can pinpoint the features that are causally responsible for counterfactual behaviour on the indirect object identification task \citep{wang2022interpretability} to a finer degree than previous decompositions. This work indicates that it is possible to resolve superposition in language models using a scalable, unsupervised method. Our method may serve as a foundation for future mechanistic interpretability work, which we hope will enable greater model transparency and steerability.",True,True,"Robert Huben and Hoagy Cunningham and Logan Riggs and Aidan Ewart and Lee Sharkey",2024.0,,https://openreview.net/forum?id=F76bwRSLeK,,,"Sparse Autoencoders Find Highly Interpretable Features in Language Models",Sparse Autoencoders Find Highly Interpretable Features in ...,https://openreview.net/forum?id=F76bwRSLeK,"This paper proposes using sparse autoencoders to learn interpretable and monosemantic features from the internal activations of language models. This paper presents a way to make the individual features of Large Language Models more interpretable by learning simple autoencoders with activation sparsity. On the originality of the approach, while we agree that none of the individual elements is novel on its own, the pipeline of using a sparse autoencoder to decompose activations in a large model (section 2), which are then passed to an automatic interpretation protocol (section 3), and then analysed in terms of the circuits that build up later features (section 5) represents a meaningful step in our ability to peer into the inner workings of language models." COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,pearce2024bilinearmlpsenableweightbased,\cite{pearce2024bilinearmlpsenableweightbased},Bilinear MLPs enable weight-based mechanistic interpretability,http://arxiv.org/abs/2410.08417v2,"A mechanistic understanding of how MLPs do computation in deep neural networks remains elusive. Current interpretability work can extract features from hidden activations over an input dataset but generally cannot explain how MLP weights construct features. One challenge is that element-wise nonlinearities introduce higher-order interactions and make it difficult to trace computations through the MLP layer. In this paper, we analyze bilinear MLPs, a type of Gated Linear Unit (GLU) without any element-wise nonlinearity that nevertheless achieves competitive performance. Bilinear MLPs can be fully expressed in terms of linear operations using a third-order tensor, allowing flexible analysis of the weights. Analyzing the spectra of bilinear MLP weights using eigendecomposition reveals interpretable low-rank structure across toy tasks, image classification, and language modeling. We use this understanding to craft adversarial examples, uncover overfitting, and identify small language model circuits directly from the weights alone. Our results demonstrate that bilinear layers serve as an interpretable drop-in replacement for current activation functions and that weight-based interpretability is viable for understanding deep-learning models.",True,True,Michael T. Pearce and Thomas Dooms and Alice Rigg and Jose M. 
Oramas and Lee Sharkey,2024.0,,https://arxiv.org/abs/2410.08417,,ArXiv preprint,Bilinear MLPs enable weight-based mechanistic interpretability,Bilinear MLPs enable weight-based mechanistic ...,https://openreview.net/forum?id=gI0kPklUKS,by MT Pearce · Cited by 2 — The close-to-linear structure of bilinear MLPs enables weight-based analysis that reveals interpretable low rank structure across multiple modalities. COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,elhage2021mathematical,\cite{elhage2021mathematical},A Mathematical Framework for Transformer Circuits,,,True,False,"Elhage, Nelson and Nanda, Neel and Olsson, Catherine and Henighan, Tom and Joseph, Nicholas and Mann, Ben and Askell, Amanda and Bai, Yuntao and Chen, Anna and Conerly, Tom and DasSarma, Nova and Drain, Dawn and Ganguli, Deep and Hatfield-Dodds, Zac and Hernandez, Danny and Jones, Andy and Kernion, Jackson and Lovitt, Liane and Ndousse, Kamal and Amodei, Dario and Brown, Tom and Clark, Jack and Kaplan, Jared and McCandlish, Sam and Olah, Chris",2021.0,,,,Transformer Circuits Thread,A Mathematical Framework for Transformer Circuits,A Walkthrough of A Mathematical Framework for ...,https://www.neelnanda.io/mechanistic-interpretability/a-walkthrough-of-a-mathematical-framework-for-transformer-circuits,"A Mathematical Framework for Transformer Circuits is, in my opinion, the coolest paper I've ever had the privilege of working on." COSMIC: Generalized Refusal Direction Identification in LLM Activations,2506.00085v1,lieberum2023doescircuitanalysisinterpretability,\cite{lieberum2023doescircuitanalysisinterpretability},"Does Circuit Analysis Interpretability Scale? Evidence from Multiple Choice Capabilities in Chinchilla",http://arxiv.org/abs/2307.09458v3,"\emph{Circuit analysis} is a promising technique for understanding the internal mechanisms of language models. However, existing analyses are done in small models far from the state of the art. To address this, we present a case study of circuit analysis in the 70B Chinchilla model, aiming to test the scalability of circuit analysis. In particular, we study multiple-choice question answering, and investigate Chinchilla's capability to identify the correct answer \emph{label} given knowledge of the correct answer \emph{text}. We find that the existing techniques of logit attribution, attention pattern visualization, and activation patching naturally scale to Chinchilla, allowing us to identify and categorize a small set of `output nodes' (attention heads and MLPs). We further study the `correct letter' category of attention heads aiming to understand the semantics of their features, with mixed results. For normal multiple-choice question answers, we significantly compress the query, key and value subspaces of the head without loss of performance when operating on the answer labels for multiple-choice questions, and we show that the query and key subspaces represent an `Nth item in an enumeration' feature to at least some extent. 
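Editor's sketch: the Chinchilla circuit-analysis record above relies on activation patching: cache an activation from a clean run, overwrite the corresponding activation in a corrupted run, and measure how much of the clean behaviour returns. A toy illustration with a two-layer module standing in for a transformer layer; everything here is an assumed stand-in for a real model.

import torch

torch.manual_seed(0)
layer1, layer2 = torch.nn.Linear(8, 8), torch.nn.Linear(8, 2)   # toy "model"

def run(x, patch=None):
    h = torch.relu(layer1(x))
    if patch is not None:
        h = patch            # overwrite with the cached activation from the other run
    return layer2(h)

clean, corrupt = torch.randn(8), torch.randn(8)
h_clean = torch.relu(layer1(clean))          # cache the clean intermediate state
delta = run(corrupt, patch=h_clean) - run(corrupt)
print(delta.abs().sum())                     # size of the restored effect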
However, when we attempt to use this explanation to understand the heads' behaviour on a more general distribution including randomized answer labels, we find that it is only a partial explanation, suggesting there is more to learn about the operation of `correct letter' heads on multiple choice question answering.",True,True,Tom Lieberum and Matthew Rahtz and János Kramár and Neel Nanda and Geoffrey Irving and Rohin Shah and Vladimir Mikulik,2023.0,,https://arxiv.org/abs/2307.09458,,ArXiv preprint,"Does Circuit Analysis Interpretability Scale? Evidence from Multiple Choice Capabilities in Chinchilla",Does Circuit Analysis Interpretability Scale? Evidence from Multiple ...,https://arxiv.org/abs/2307.09458, "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,liang2022holistic,\cite{liang2022holistic},Holistic Evaluation of Language Models,http://arxiv.org/abs/2211.09110v2,"Language models (LMs) are becoming the foundation for almost all major language technologies, but their capabilities, limitations, and risks are not well understood. We present Holistic Evaluation of Language Models (HELM) to improve the transparency of language models. First, we taxonomize the vast space of potential scenarios (i.e. use cases) and metrics (i.e.
desiderata) that are of interest for LMs. Then we select a broad subset based on coverage and feasibility, noting what's missing or underrepresented (e.g. question answering for neglected English dialects, metrics for trustworthiness). Second, we adopt a multi-metric approach: We measure 7 metrics (accuracy, calibration, robustness, fairness, bias, toxicity, and efficiency) for each of 16 core scenarios when possible (87.5% of the time). This ensures metrics beyond accuracy don't fall to the wayside, and that trade-offs are clearly exposed. We also perform 7 targeted evaluations, based on 26 targeted scenarios, to analyze specific aspects (e.g. reasoning, disinformation). Third, we conduct a large-scale evaluation of 30 prominent language models (spanning open, limited-access, and closed models) on all 42 scenarios, 21 of which were not previously used in mainstream LM evaluation. Prior to HELM, models on average were evaluated on just 17.9% of the core HELM scenarios, with some prominent models not sharing a single scenario in common. We improve this to 96.0%: now all 30 models have been densely benchmarked on the same core scenarios and metrics under standardized conditions. Our evaluation surfaces 25 top-level findings. For full transparency, we release all raw model prompts and completions publicly for further analysis, as well as a general modular toolkit. We intend for HELM to be a living benchmark for the community, continuously updated with new scenarios, metrics, and models." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,hendrycks2020measuring,\cite{hendrycks2020measuring},Measuring Massive Multitask Language Understanding,http://arxiv.org/abs/2009.03300v3,"We propose a new test to measure a text model's multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more. To attain high accuracy on this test, models must possess extensive world knowledge and problem solving ability. We find that while most recent models have near random-chance accuracy, the very largest GPT-3 model improves over random chance by almost 20 percentage points on average. However, on every one of the 57 tasks, the best models still need substantial improvements before they can reach expert-level accuracy. Models also have lopsided performance and frequently do not know when they are wrong. Worse, they still have near-random accuracy on some socially important subjects such as morality and law. By comprehensively evaluating the breadth and depth of a model's academic and professional understanding, our test can be used to analyze models across many tasks and to identify important shortcomings.",True,True,"Hendrycks, Dan and Burns, Collin and Basart, Steven and Zou, Andy and Mazeika, Mantas and Song, Dawn and Steinhardt, Jacob",2021.0,,,,,Measuring Massive Multitask Language Understanding,Measuring Massive Multitask Language Understanding,http://arxiv.org/pdf/2009.03300v3,"We propose a new test to measure a text model's multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more. To attain high accuracy on this test, models must possess extensive world knowledge and problem solving ability. We find that while most recent models have near random-chance accuracy, the very largest GPT-3 model improves over random chance by almost 20 percentage points on average. 
However, on every one of the 57 tasks, the best models still need substantial improvements before they can reach expert-level accuracy. Models also have lopsided performance and frequently do not know when they are wrong. Worse, they still have near-random accuracy on some socially important subjects such as morality and law. By comprehensively evaluating the breadth and depth of a model's academic and professional understanding, our test can be used to analyze models across many tasks and to identify important shortcomings." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,open-llm-leaderboard-v2,\cite{open-llm-leaderboard-v2},Open LLM Leaderboard v2,,,True,False,Clémentine Fourrier and Nathan Habib and Alina Lozovskaya and Konrad Szafer and Thomas Wolf,2024.0,,,,,Open LLM Leaderboard v2,Hugging Face Upgrades Open LLM Leaderboard v2 for ... - InfoQ,https://www.infoq.com/news/2024/10/open-llm-leaderboard-v2-launch/,"Hugging Face Upgrades Open LLM Leaderboard v2 for Enhanced AI Model Comparison: Hugging Face has recently released Open LLM Leaderboard v2, an upgraded version of their popular benchmarking platform for large language models. InfoQ spoke to Alina Lozovskaia, one of the Leaderboard maintainers at Hugging Face, to learn more about the motivation behind this update and its implications for the AI community." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,blodgett-etal-2020-language,\cite{blodgett-etal-2020-language},"Language (Technology) is Power: A Critical Survey of ""Bias"" in NLP",http://arxiv.org/abs/2005.14050v2,"We survey 146 papers analyzing ""bias"" in NLP systems, finding that their motivations are often vague, inconsistent, and lacking in normative reasoning, despite the fact that analyzing ""bias"" is an inherently normative process. We further find that these papers' proposed quantitative techniques for measuring or mitigating ""bias"" are poorly matched to their motivations and do not engage with the relevant literature outside of NLP. Based on these findings, we describe the beginnings of a path forward by proposing three recommendations that should guide work analyzing ""bias"" in NLP systems.
These recommendations rest on a greater recognition of the relationships between language and social hierarchies, encouraging researchers and practitioners to articulate their conceptualizations of ""bias""---i.e., what kinds of system behaviors are harmful, in what ways, to whom, and why, as well as the normative reasoning underlying these statements---and to center work around the lived experiences of members of communities affected by NLP systems, while interrogating and reimagining the power relations between technologists and such communities.",True,True,"Blodgett, Su Lin and Barocas, Solon and Daum{\'e} III, Hal and Wallach, Hanna",2020.0,,,,,"Language (Technology) is Power: A Critical Survey of ""Bias"" in NLP","Language (Technology) is Power: A Critical Survey of ""Bias"" in NLP",http://arxiv.org/pdf/2005.14050v2,"We survey 146 papers analyzing ""bias"" in NLP systems, finding that their motivations are often vague, inconsistent, and lacking in normative reasoning, despite the fact that analyzing ""bias"" is an inherently normative process. We further find that these papers' proposed quantitative techniques for measuring or mitigating ""bias"" are poorly matched to their motivations and do not engage with the relevant literature outside of NLP. Based on these findings, we describe the beginnings of a path forward by proposing three recommendations that should guide work analyzing ""bias"" in NLP systems. These recommendations rest on a greater recognition of the relationships between language and social hierarchies, encouraging researchers and practitioners to articulate their conceptualizations of ""bias""---i.e., what kinds of system behaviors are harmful, in what ways, to whom, and why, as well as the normative reasoning underlying these statements---and to center work around the lived experiences of members of communities affected by NLP systems, while interrogating and reimagining the power relations between technologists and such communities." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,yang2024assessing,\cite{yang2024assessing},"Assessing Adversarial Robustness of Large Language Models: An Empirical Study",http://arxiv.org/abs/2405.02764v2,"Large Language Models (LLMs) have revolutionized natural language processing, but their robustness against adversarial attacks remains a critical concern. We presents a novel white-box style attack approach that exposes vulnerabilities in leading open-source LLMs, including Llama, OPT, and T5. We assess the impact of model size, structure, and fine-tuning strategies on their resistance to adversarial perturbations. Our comprehensive evaluation across five diverse text classification tasks establishes a new benchmark for LLM robustness. The findings of this study have far-reaching implications for the reliable deployment of LLMs in real-world applications and contribute to the advancement of trustworthy AI systems.",True,True,"Yang, Zeyu and Meng, Zhao and Zheng, Xiaochen and Wattenhofer, Roger",2024.0,,,,,"Assessing Adversarial Robustness of Large Language Models: An Empirical Study",[PDF] Assessing Adversarial Robustness of Large Language Models,https://genai-evaluation-kdd2024.github.io/genai-evalution-kdd2024/assets/papers/GenAI_Evaluation_KDD2024_paper_24.pdf,"In this paper, we present an extensive study of three leading open- source LLMs: Llama, OPT, and T5. We evaluate the robustness of various sizes" "Is Your Model Fairly Certain? 
Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,hartvigsen2022toxigen,\cite{hartvigsen2022toxigen},"ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection",http://arxiv.org/abs/2203.09509v4,"Toxic language detection systems often falsely flag text that contains minority group mentions as toxic, as those groups are often the targets of online hate. Such over-reliance on spurious correlations also causes systems to struggle with detecting implicitly toxic language. To help mitigate these issues, we create ToxiGen, a new large-scale and machine-generated dataset of 274k toxic and benign statements about 13 minority groups. We develop a demonstration-based prompting framework and an adversarial classifier-in-the-loop decoding method to generate subtly toxic and benign text with a massive pretrained language model. Controlling machine generation in this way allows ToxiGen to cover implicitly toxic text at a larger scale, and about more demographic groups, than previous resources of human-written text. We conduct a human evaluation on a challenging subset of ToxiGen and find that annotators struggle to distinguish machine-generated text from human-written language. We also find that 94.5% of toxic examples are labeled as hate speech by human annotators. Using three publicly-available datasets, we show that finetuning a toxicity classifier on our data improves its performance on human-written data substantially. We also demonstrate that ToxiGen can be used to fight machine-generated toxicity as finetuning improves the classifier significantly on our evaluation subset. Our code and data can be found at https://github.com/microsoft/ToxiGen.",True,True,"Hartvigsen, Thomas and Gabriel, Saadia and Palangi, Hamid and Sap, Maarten and Ray, Dipankar and Kamar, Ece",2022.0,,,,,"ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection",ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial ...,https://www.researchgate.net/publication/361059047_ToxiGen_A_Large-Scale_Machine-Generated_Dataset_for_Adversarial_and_Implicit_Hate_Speech_Detection,"Toxigen is a large-scale dataset featuring over 270K machine-generated toxic and benign statements about 13 minority groups, specifically designed to expose" "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,magooda2023framework,\cite{magooda2023framework},"A Framework for Automated Measurement of Responsible AI Harms in Generative AI Applications",http://arxiv.org/abs/2310.17750v1,"We present a framework for the automated measurement of responsible AI (RAI) metrics for large language models (LLMs) and associated products and services. Our framework for automatically measuring harms from LLMs builds on existing technical and sociotechnical expertise and leverages the capabilities of state-of-the-art LLMs, such as GPT-4. We use this framework to run through several case studies investigating how different LLMs may violate a range of RAI-related principles. The framework may be employed alongside domain-specific sociotechnical expertise to create measurements for new harm areas in the future. 
By implementing this framework, we aim to enable more advanced harm measurement efforts and further the responsible use of LLMs.",True,True,"Magooda, Ahmed and Helyar, Alec and Jackson, Kyle and Sullivan, David and Atalla, Chad and Sheng, Emily and Vann, Dan and Edgar, Richard and Palangi, Hamid and Lutz, Roman and others",2023.0,,,,arXiv preprint arXiv:2310.17750,"A Framework for Automated Measurement of Responsible AI Harms in Generative AI Applications",A Framework for Automated Measurement of Responsible ...,https://www.microsoft.com/en-us/research/publication/a-framework-for-automated-measurement-of-responsible-ai-harms-in-generative-ai-applications/?locale=zh-cn,We present a framework for the automated measurement of responsible AI (RAI) metrics for large language models (LLMs) and associated products "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,li2023survey,\cite{li2023survey},A Survey on Fairness in Large Language Models,http://arxiv.org/abs/2308.10149v2,"Large Language Models (LLMs) have shown powerful performance and development prospects and are widely deployed in the real world. However, LLMs can capture social biases from unprocessed training data and propagate the biases to downstream tasks. Unfair LLM systems have undesirable social impacts and potential harms. In this paper, we provide a comprehensive review of related research on fairness in LLMs. Considering the influence of parameter magnitude and training paradigm on research strategy, we divide existing fairness research into oriented to medium-sized LLMs under pre-training and fine-tuning paradigms and oriented to large-sized LLMs under prompting paradigms. First, for medium-sized LLMs, we introduce evaluation metrics and debiasing methods from the perspectives of intrinsic bias and extrinsic bias, respectively. Then, for large-sized LLMs, we introduce recent fairness research, including fairness evaluation, reasons for bias, and debiasing methods. Finally, we discuss and provide insight on the challenges and future directions for the development of fairness in LLMs.",True,True,"Li, Yingji and Du, Mengnan and Song, Rui and Wang, Xin and Wang, Ying",2023.0,,,,arXiv preprint arXiv:2308.10149,A Survey on Fairness in Large Language Models,A Survey on Fairness in Large Language Models,http://arxiv.org/pdf/2308.10149v2,"Large Language Models (LLMs) have shown powerful performance and development prospects and are widely deployed in the real world. However, LLMs can capture social biases from unprocessed training data and propagate the biases to downstream tasks. Unfair LLM systems have undesirable social impacts and potential harms. In this paper, we provide a comprehensive review of related research on fairness in LLMs. Considering the influence of parameter magnitude and training paradigm on research strategy, we divide existing fairness research into oriented to medium-sized LLMs under pre-training and fine-tuning paradigms and oriented to large-sized LLMs under prompting paradigms. First, for medium-sized LLMs, we introduce evaluation metrics and debiasing methods from the perspectives of intrinsic bias and extrinsic bias, respectively. Then, for large-sized LLMs, we introduce recent fairness research, including fairness evaluation, reasons for bias, and debiasing methods. Finally, we discuss and provide insight on the challenges and future directions for the development of fairness in LLMs." "Is Your Model Fairly Certain? 
Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,mackraz2024evaluating,\cite{mackraz2024evaluating},"Evaluating Gender Bias Transfer between Pre-trained and Prompt-Adapted Language Models",http://arxiv.org/abs/2412.03537v1,"Large language models (LLMs) are increasingly being adapted to achieve task-specificity for deployment in real-world decision systems. Several previous works have investigated the bias transfer hypothesis (BTH) by studying the effect of the fine-tuning adaptation strategy on model fairness to find that fairness in pre-trained masked language models have limited effect on the fairness of models when adapted using fine-tuning. In this work, we expand the study of BTH to causal models under prompt adaptations, as prompting is an accessible, and compute-efficient way to deploy models in real-world systems. In contrast to previous works, we establish that intrinsic biases in pre-trained Mistral, Falcon and Llama models are strongly correlated (rho >= 0.94) with biases when the same models are zero- and few-shot prompted, using a pronoun co-reference resolution task. Further, we find that bias transfer remains strongly correlated even when LLMs are specifically prompted to exhibit fair or biased behavior (rho >= 0.92), and few-shot length and stereotypical composition are varied (rho >= 0.97). Our findings highlight the importance of ensuring fairness in pre-trained LLMs, especially when they are later used to perform downstream tasks via prompt adaptation.",True,True,"Mackraz, Natalie and Sivakumar, Nivedha and Khorshidi, Samira and Patel, Krishna and Theobald, Barry-John and Zappella, Luca and Apostoloff, Nicholas",2024.0,,,,arXiv preprint arXiv:2412.03537,"Evaluating Gender Bias Transfer between Pre-trained and Prompt-Adapted Language Models",Evaluating Gender Bias Transfer between Pre-trained and Prompt ...,https://openreview.net/forum?id=HyN9POiYhN,"The primary purpose of this research is to understand if intrinsic bias in pre-trained models can transfer to downstream tasks upon prompting, to gain" "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,patel2024fairness,\cite{patel2024fairness},Fairness Dynamics During Training,http://arxiv.org/abs/2506.01709v1,"We investigate fairness dynamics during Large Language Model (LLM) training to enable the diagnoses of biases and mitigations through training interventions like early stopping; we find that biases can emerge suddenly and do not always follow common performance metrics. We introduce two new metrics to evaluate fairness dynamics holistically during model pre-training: Average Rank and Jensen-Shannon Divergence by Parts. These metrics provide insights into the Pythia models' progression of biases in gender prediction of occupations on the WinoBias dataset. 
By monitoring these dynamics, we find that (1) Pythia-6.9b is biased towards men; it becomes more performant and confident predicting ""male"" than ""female"" during training, (2) via early-stopping, Pythia-6.9b can exchange 1.7% accuracy on LAMBADA for a 92.5% increase in fairness, and (3) larger models can exhibit more bias; Pythia-6.9b makes more assumptions about gender than Pythia-160m, even when a subject's gender is not specified.",True,True,"Patel, Krishna and Sivakumar, Nivedha and Theobald, Barry-John and Zappella, Luca and Apostoloff, Nicholas",,,,,Neurips Evaluating Evaluations: Examining Best Practices for Measuring Broader Impacts of Generative AI Workshop 2024,Fairness Dynamics During Training,Fairness Dynamics During Training,http://arxiv.org/pdf/2506.01709v1,"We investigate fairness dynamics during Large Language Model (LLM) training to enable the diagnoses of biases and mitigations through training interventions like early stopping; we find that biases can emerge suddenly and do not always follow common performance metrics. We introduce two new metrics to evaluate fairness dynamics holistically during model pre-training: Average Rank and Jensen-Shannon Divergence by Parts. These metrics provide insights into the Pythia models' progression of biases in gender prediction of occupations on the WinoBias dataset. By monitoring these dynamics, we find that (1) Pythia-6.9b is biased towards men; it becomes more performant and confident predicting ""male"" than ""female"" during training, (2) via early-stopping, Pythia-6.9b can exchange 1.7% accuracy on LAMBADA for a 92.5% increase in fairness, and (3) larger models can exhibit more bias; Pythia-6.9b makes more assumptions about gender than Pythia-160m, even when a subject's gender is not specified." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,laskar2023systematic,\cite{laskar2023systematic},"A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets",http://arxiv.org/abs/2305.18486v4,"The development of large language models (LLMs) such as ChatGPT has brought a lot of attention recently. However, their evaluation in the benchmark academic datasets remains under-explored due to the difficulty of evaluating the generative outputs produced by this model against the ground truth. In this paper, we aim to present a thorough evaluation of ChatGPT's performance on diverse academic datasets, covering tasks like question-answering, text summarization, code generation, commonsense reasoning, mathematical problem-solving, machine translation, bias detection, and ethical considerations. Specifically, we evaluate ChatGPT across 140 tasks and analyze 255K responses it generates in these datasets. This makes our work the largest evaluation of ChatGPT in NLP benchmarks. In short, our study aims to validate the strengths and weaknesses of ChatGPT in various tasks and provide insights for future research using LLMs. We also report a new emergent ability to follow multi-query instructions that we mostly found in ChatGPT and other instruction-tuned models. Our extensive evaluation shows that even though ChatGPT is capable of performing a wide variety of tasks, and may obtain impressive performance in several benchmark datasets, it is still far from achieving the ability to reliably solve many challenging tasks. 
By providing a thorough assessment of ChatGPT's performance across diverse NLP tasks, this paper sets the stage for a targeted deployment of ChatGPT-like LLMs in real-world applications.",True,True,"Laskar, Md Tahmid Rahman and Bari, M Saiful and Rahman, Mizanur and Bhuiyan, Md Amran Hossen and Joty, Shafiq and Huang, Jimmy Xiangji",2023.0,,,,,"A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets",A Systematic Study and Comprehensive Evaluation of ChatGPT on ...,https://arxiv.org/abs/2305.18486,"arXiv:2305.18486 (cs): A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets, by Md Tahmid Rahman Laskar and 5 other authors." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,chu2024fairness,\cite{chu2024fairness},Fairness in Large Language Models: A Taxonomic Survey,http://arxiv.org/abs/2404.01349v2,"Large Language Models (LLMs) have demonstrated remarkable success across various domains. However, despite their promising performance in numerous real-world applications, most of these algorithms lack fairness considerations. Consequently, they may lead to discriminatory outcomes against certain communities, particularly marginalized populations, prompting extensive study in fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in traditional machine learning, entails exclusive backgrounds, taxonomies, and fulfillment techniques. To this end, this survey presents a comprehensive overview of recent advances in the existing literature concerning fair LLMs. 
Specifically, a brief introduction to LLMs is provided, followed by an analysis of factors contributing to bias in LLMs. Additionally, the concept of fairness in LLMs is discussed categorically, summarizing metrics for evaluating bias in LLMs and existing algorithms for promoting fairness. Furthermore, resources for evaluating bias in LLMs, including toolkits and datasets, are summarized. Finally, existing research challenges and open questions are discussed." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,wang2024ceb,\cite{wang2024ceb},"CEB: Compositional Evaluation Benchmark for Fairness in Large Language Models",http://arxiv.org/abs/2407.02408v2,"As Large Language Models (LLMs) are increasingly deployed to handle various natural language processing (NLP) tasks, concerns regarding the potential negative societal impacts of LLM-generated content have also arisen. To evaluate the biases exhibited by LLMs, researchers have recently proposed a variety of datasets. However, existing bias evaluation efforts often focus on only a particular type of bias and employ inconsistent evaluation metrics, leading to difficulties in comparison across different datasets and LLMs. To address these limitations, we collect a variety of datasets designed for the bias evaluation of LLMs, and further propose CEB, a Compositional Evaluation Benchmark that covers different types of bias across different social groups and tasks. The curation of CEB is based on our newly proposed compositional taxonomy, which characterizes each dataset from three dimensions: bias types, social groups, and tasks. By combining the three dimensions, we develop a comprehensive evaluation strategy for the bias in LLMs. Our experiments demonstrate that the levels of bias vary across these dimensions, thereby providing guidance for the development of specific bias mitigation methods.",True,True,"Wang, Song and Wang, Peng and Zhou, Tong and Dong, Yushun and Tan, Zhen and Li, Jundong",2024.0,,,,arXiv preprint arXiv:2407.02408,"CEB: Compositional Evaluation Benchmark for Fairness in Large Language Models",CEB: Compositional Evaluation Benchmark for Fairness in Large...,https://openreview.net/forum?id=IUmj2dw5se,Summary: This paper proposes a comprehensive benchmark for bias and fairness in large language models. The authors first propose a multi-layers taxonomy that "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,ye2024benchmarking,\cite{ye2024benchmarking},Benchmarking LLMs via Uncertainty Quantification,http://arxiv.org/abs/2401.12794v3,"The proliferation of open-source Large Language Models (LLMs) from various institutions has highlighted the urgent need for comprehensive evaluation methods. However, current evaluation platforms, such as the widely recognized HuggingFace open LLM leaderboard, neglect a crucial aspect -- uncertainty, which is vital for thoroughly assessing LLMs. To bridge this gap, we introduce a new benchmarking approach for LLMs that integrates uncertainty quantification. Our examination involves nine LLMs (LLM series) spanning five representative natural language processing tasks. Our findings reveal that: I) LLMs with higher accuracy may exhibit lower certainty; II) Larger-scale LLMs may display greater uncertainty compared to their smaller counterparts; and III) Instruction-finetuning tends to increase the uncertainty of LLMs. 
These results underscore the significance of incorporating uncertainty in the evaluation of LLMs.",True,True,"Ye, Fanghua and Yang, Mingming and Pang, Jianhui and Wang, Longyue and Wong, Derek F and Yilmaz, Emine and Shi, Shuming and Tu, Zhaopeng",2024.0,,,,arXiv preprint arXiv:2401.12794,Benchmarking LLMs via Uncertainty Quantification,Benchmarking LLMs via Uncertainty Quantification,http://arxiv.org/pdf/2401.12794v3,"The proliferation of open-source Large Language Models (LLMs) from various institutions has highlighted the urgent need for comprehensive evaluation methods. However, current evaluation platforms, such as the widely recognized HuggingFace open LLM leaderboard, neglect a crucial aspect -- uncertainty, which is vital for thoroughly assessing LLMs. To bridge this gap, we introduce a new benchmarking approach for LLMs that integrates uncertainty quantification. Our examination involves nine LLMs (LLM series) spanning five representative natural language processing tasks. Our findings reveal that: I) LLMs with higher accuracy may exhibit lower certainty; II) Larger-scale LLMs may display greater uncertainty compared to their smaller counterparts; and III) Instruction-finetuning tends to increase the uncertainty of LLMs. These results underscore the significance of incorporating uncertainty in the evaluation of LLMs." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,fabris2022algorithmic,\cite{fabris2022algorithmic},Algorithmic Fairness Datasets: the Story so Far,http://arxiv.org/abs/2202.01711v4,"Data-driven algorithms are studied in diverse domains to support critical decisions, directly impacting people's well-being. As a result, a growing community of researchers has been investigating the equity of existing algorithms and proposing novel ones, advancing the understanding of risks and opportunities of automated decision-making for historically disadvantaged populations. Progress in fair Machine Learning hinges on data, which can be appropriately used only if adequately documented. Unfortunately, the algorithmic fairness community suffers from a collective data documentation debt caused by a lack of information on specific resources (opacity) and scatteredness of available information (sparsity). In this work, we target data documentation debt by surveying over two hundred datasets employed in algorithmic fairness research, and producing standardized and searchable documentation for each of them. Moreover we rigorously identify the three most popular fairness datasets, namely Adult, COMPAS and German Credit, for which we compile in-depth documentation. This unifying documentation effort supports multiple contributions. Firstly, we summarize the merits and limitations of Adult, COMPAS and German Credit, adding to and unifying recent scholarship, calling into question their suitability as general-purpose fairness benchmarks. Secondly, we document and summarize hundreds of available alternatives, annotating their domain and supported fairness tasks, along with additional properties of interest for fairness researchers. Finally, we analyze these datasets from the perspective of five important data curation topics: anonymization, consent, inclusivity, sensitive attributes, and transparency. 
We discuss different approaches and levels of attention to these topics, making them tangible, and distill them into a set of best practices for the curation of novel resources.",True,True,"Fabris, Alessandro and Messina, Stefano and Silvello, Gianmaria and Susto, Gian Antonio",2022.0,,,,,Algorithmic Fairness Datasets: the Story so Far,Algorithmic Fairness Datasets: the Story so Far,http://arxiv.org/pdf/2202.01711v4,"Data-driven algorithms are studied in diverse domains to support critical decisions, directly impacting people's well-being. As a result, a growing community of researchers has been investigating the equity of existing algorithms and proposing novel ones, advancing the understanding of risks and opportunities of automated decision-making for historically disadvantaged populations. Progress in fair Machine Learning hinges on data, which can be appropriately used only if adequately documented. Unfortunately, the algorithmic fairness community suffers from a collective data documentation debt caused by a lack of information on specific resources (opacity) and scatteredness of available information (sparsity). In this work, we target data documentation debt by surveying over two hundred datasets employed in algorithmic fairness research, and producing standardized and searchable documentation for each of them. Moreover we rigorously identify the three most popular fairness datasets, namely Adult, COMPAS and German Credit, for which we compile in-depth documentation. This unifying documentation effort supports multiple contributions. Firstly, we summarize the merits and limitations of Adult, COMPAS and German Credit, adding to and unifying recent scholarship, calling into question their suitability as general-purpose fairness benchmarks. Secondly, we document and summarize hundreds of available alternatives, annotating their domain and supported fairness tasks, along with additional properties of interest for fairness researchers. Finally, we analyze these datasets from the perspective of five important data curation topics: anonymization, consent, inclusivity, sensitive attributes, and transparency. We discuss different approaches and levels of attention to these topics, making them tangible, and distill them into a set of best practices for the curation of novel resources." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,levesque2012winograd,\cite{levesque2012winograd},The Defeat of the Winograd Schema Challenge,http://arxiv.org/abs/2201.02387v3,"The Winograd Schema Challenge - a set of twin sentences involving pronoun reference disambiguation that seem to require the use of commonsense knowledge - was proposed by Hector Levesque in 2011. By 2019, a number of AI systems, based on large pre-trained transformer-based language models and fine-tuned on these kinds of problems, achieved better than 90% accuracy. In this paper, we review the history of the Winograd Schema Challenge and discuss the lasting contributions of the flurry of research that has taken place on the WSC in the last decade. 
We discuss the significance of various datasets developed for WSC, and the research community's deeper understanding of the role of surrogate tasks in assessing the intelligence of an AI system.",True,True,"Levesque, Hector and Davis, Ernest and Morgenstern, Leora",2012.0,,,,,The Defeat of the Winograd Schema Challenge,The Defeat of the Winograd Schema Challenge,http://arxiv.org/pdf/2201.02387v3,"The Winograd Schema Challenge - a set of twin sentences involving pronoun reference disambiguation that seem to require the use of commonsense knowledge - was proposed by Hector Levesque in 2011. By 2019, a number of AI systems, based on large pre-trained transformer-based language models and fine-tuned on these kinds of problems, achieved better than 90% accuracy. In this paper, we review the history of the Winograd Schema Challenge and discuss the lasting contributions of the flurry of research that has taken place on the WSC in the last decade. We discuss the significance of various datasets developed for WSC, and the research community's deeper understanding of the role of surrogate tasks in assessing the intelligence of an AI system." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,zhao2018gender,\cite{zhao2018gender},Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods,http://arxiv.org/abs/1804.06876v1,"We introduce a new benchmark, WinoBias, for coreference resolution focused on gender bias. Our corpus contains Winograd-schema style sentences with entities corresponding to people referred by their occupation (e.g. the nurse, the doctor, the carpenter). We demonstrate that a rule-based, a feature-rich, and a neural coreference system all link gendered pronouns to pro-stereotypical entities with higher accuracy than anti-stereotypical entities, by an average difference of 21.1 in F1 score. Finally, we demonstrate a data-augmentation approach that, in combination with existing word-embedding debiasing techniques, removes the bias demonstrated by these systems in WinoBias without significantly affecting their performance on existing coreference benchmark datasets. Our dataset and code are available at http://winobias.org.",True,True,"Zhao, Jieyu and Wang, Tianlu and Yatskar, Mark and Ordonez, Vicente and Chang, Kai-Wei",2018.0,,,,,Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods,Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods,http://arxiv.org/pdf/1804.06876v1,"We introduce a new benchmark, WinoBias, for coreference resolution focused on gender bias. Our corpus contains Winograd-schema style sentences with entities corresponding to people referred by their occupation (e.g. the nurse, the doctor, the carpenter). We demonstrate that a rule-based, a feature-rich, and a neural coreference system all link gendered pronouns to pro-stereotypical entities with higher accuracy than anti-stereotypical entities, by an average difference of 21.1 in F1 score. Finally, we demonstrate a data-augmentation approach that, in combination with existing word-embedding debiasing techniques, removes the bias demonstrated by these systems in WinoBias without significantly affecting their performance on existing coreference benchmark datasets. Our dataset and code are available at http://winobias.org." "Is Your Model Fairly Certain? 
Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,vanmassenhove2021neutral,\cite{vanmassenhove2021neutral},"NeuTral Rewriter: A Rule-Based and Neural Approach to Automatic Rewriting into Gender-Neutral Alternatives",http://arxiv.org/abs/2109.06105v1,"Recent years have seen an increasing need for gender-neutral and inclusive language. Within the field of NLP, there are various mono- and bilingual use cases where gender inclusive language is appropriate, if not preferred due to ambiguity or uncertainty in terms of the gender of referents. In this work, we present a rule-based and a neural approach to gender-neutral rewriting for English along with manually curated synthetic data (WinoBias+) and natural data (OpenSubtitles and Reddit) benchmarks. A detailed manual and automatic evaluation highlights how our NeuTral Rewriter, trained on data generated by the rule-based approach, obtains word error rates (WER) below 0.18% on synthetic, in-domain and out-domain test sets.",True,True,"Vanmassenhove, Eva and Emmery, Chris and Shterionov, Dimitar",2021.0,,,,,"NeuTral Rewriter: A Rule-Based and Neural Approach to Automatic Rewriting into Gender-Neutral Alternatives",NeuTral Rewriter: A Rule-Based and Neural Approach to Automatic ...,https://www.researchgate.net/publication/357122955_NeuTral_Rewriter_A_Rule-Based_and_Neural_Approach_to_Automatic_Rewriting_into_Gender_Neutral_Alternatives,"Our work falls Round-trip translation (from gender-neutral to gender-biased) and neural text paraphrasing German [18] Rule-based gender rewriting" "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,rudinger2018gender,\cite{rudinger2018gender},Gender Bias in Coreference Resolution,http://arxiv.org/abs/1804.09301v1,"We present an empirical study of gender bias in coreference resolution systems. We first introduce a novel, Winograd schema-style set of minimal pair sentences that differ only by pronoun gender. With these ""Winogender schemas,"" we evaluate and confirm systematic gender bias in three publicly-available coreference resolution systems, and correlate this bias with real-world and textual gender statistics.",True,True,"Rudinger, Rachel and Naradowsky, Jason and Leonard, Brian and Van Durme, Benjamin",2018.0,,,,,Gender Bias in Coreference Resolution,Gender Bias in Coreference Resolution,http://arxiv.org/pdf/1804.09301v1,"We present an empirical study of gender bias in coreference resolution systems. We first introduce a novel, Winograd schema-style set of minimal pair sentences that differ only by pronoun gender. With these ""Winogender schemas,"" we evaluate and confirm systematic gender bias in three publicly-available coreference resolution systems, and correlate this bias with real-world and textual gender statistics." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,srivastava2023beyond,\cite{srivastava2023beyond},"Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models",http://arxiv.org/abs/2206.04615v3,"Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. 
To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 450 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit ""breakthrough"" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.",True,True,{BIG-bench authors},2023.0,,,,TMLR,"Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models",Quantifying and extrapolating the capabilities of language models,https://openreview.net/forum?id=uyTL5Bvosj,The paper introduces the Beyond the Imitation Game benchmark (BIG-bench) as a way to better understand the current and near-future capabilities and limitations "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,dhamala2021bold,\cite{dhamala2021bold},"BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation",http://arxiv.org/abs/2101.11718v1,"Recent advances in deep learning techniques have enabled machines to generate cohesive open-ended text when prompted with a sequence of words as context. While these models now empower many downstream applications from conversation bots to automatic storytelling, they have been shown to generate texts that exhibit social biases. To systematically study and benchmark social biases in open-ended language generation, we introduce the Bias in Open-Ended Language Generation Dataset (BOLD), a large-scale dataset that consists of 23,679 English text generation prompts for bias benchmarking across five domains: profession, gender, race, religion, and political ideology. We also propose new automated metrics for toxicity, psycholinguistic norms, and text gender polarity to measure social biases in open-ended text generation from multiple angles. An examination of text generated from three popular language models reveals that the majority of these models exhibit a larger social bias than human-written Wikipedia text across all domains. 
With these results we highlight the need to benchmark biases in open-ended language generation and caution users of language generation models on downstream tasks to be cognizant of these embedded prejudices.",True,True,"Dhamala, Jwala and Sun, Tony and Kumar, Varun and Krishna, Satyapriya and Pruksachatkun, Yada and Chang, Kai-Wei and Gupta, Rahul",2021.0,,,,,"BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation",Bias in Open-ended Language Generation Dataset (BOLD) - GitHub,https://github.com/amazon-science/bold,Bias in Open-ended Language Generation Dataset (BOLD) is a dataset to evaluate fairness in open-ended language generation in English language. "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,kotek2023gender,\cite{kotek2023gender},Gender bias and stereotypes in Large Language Models,http://arxiv.org/abs/2308.14921v1,"Large Language Models (LLMs) have made substantial progress in the past several months, shattering state-of-the-art benchmarks in many domains. This paper investigates LLMs' behavior with respect to gender stereotypes, a known issue for prior models. We use a simple paradigm to test the presence of gender bias, building on but differing from WinoBias, a commonly used gender bias dataset, which is likely to be included in the training data of current LLMs. We test four recently published LLMs and demonstrate that they express biased assumptions about men and women's occupations. Our contributions in this paper are as follows: (a) LLMs are 3-6 times more likely to choose an occupation that stereotypically aligns with a person's gender; (b) these choices align with people's perceptions better than with the ground truth as reflected in official job statistics; (c) LLMs in fact amplify the bias beyond what is reflected in perceptions or the ground truth; (d) LLMs ignore crucial ambiguities in sentence structure 95% of the time in our study items, but when explicitly prompted, they recognize the ambiguity; (e) LLMs provide explanations for their choices that are factually inaccurate and likely obscure the true reason behind their predictions. That is, they provide rationalizations of their biased behavior. This highlights a key property of these models: LLMs are trained on imbalanced datasets; as such, even with the recent successes of reinforcement learning with human feedback, they tend to reflect those imbalances back at us. As with other types of societal biases, we suggest that LLMs must be carefully tested to ensure that they treat minoritized individuals and communities equitably.",True,True,"Kotek, Hadas and Dockum, Rikker and Sun, David",2023.0,,,,,Gender bias and stereotypes in Large Language Models,Gender bias and stereotypes in Large Language Models,http://arxiv.org/pdf/2308.14921v1,"Large Language Models (LLMs) have made substantial progress in the past several months, shattering state-of-the-art benchmarks in many domains. This paper investigates LLMs' behavior with respect to gender stereotypes, a known issue for prior models. We use a simple paradigm to test the presence of gender bias, building on but differing from WinoBias, a commonly used gender bias dataset, which is likely to be included in the training data of current LLMs. We test four recently published LLMs and demonstrate that they express biased assumptions about men and women's occupations. 
Our contributions in this paper are as follows: (a) LLMs are 3-6 times more likely to choose an occupation that stereotypically aligns with a person's gender; (b) these choices align with people's perceptions better than with the ground truth as reflected in official job statistics; (c) LLMs in fact amplify the bias beyond what is reflected in perceptions or the ground truth; (d) LLMs ignore crucial ambiguities in sentence structure 95% of the time in our study items, but when explicitly prompted, they recognize the ambiguity; (e) LLMs provide explanations for their choices that are factually inaccurate and likely obscure the true reason behind their predictions. That is, they provide rationalizations of their biased behavior. This highlights a key property of these models: LLMs are trained on imbalanced datasets; as such, even with the recent successes of reinforcement learning with human feedback, they tend to reflect those imbalances back at us. As with other types of societal biases, we suggest that LLMs must be carefully tested to ensure that they treat minoritized individuals and communities equitably." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,parrish2021bbq,\cite{parrish2021bbq},BBQ: A Hand-Built Bias Benchmark for Question Answering,http://arxiv.org/abs/2110.08193v2,"It is well documented that NLP models learn social biases, but little work has been done on how these biases manifest in model outputs for applied tasks like question answering (QA). We introduce the Bias Benchmark for QA (BBQ), a dataset of question sets constructed by the authors that highlight attested social biases against people belonging to protected classes along nine social dimensions relevant for U.S. English-speaking contexts. Our task evaluates model responses at two levels: (i) given an under-informative context, we test how strongly responses reflect social biases, and (ii) given an adequately informative context, we test whether the model's biases override a correct answer choice. We find that models often rely on stereotypes when the context is under-informative, meaning the model's outputs consistently reproduce harmful biases in this setting. Though models are more accurate when the context provides an informative answer, they still rely on stereotypes and average up to 3.4 percentage points higher accuracy when the correct answer aligns with a social bias than when it conflicts, with this difference widening to over 5 points on examples targeting gender for most models tested.",True,True,"Parrish, Alicia and Chen, Angelica and Nangia, Nikita and Padmakumar, Vishakh and Phang, Jason and Thompson, Jana and Htut, Phu Mon and Bowman, Samuel R",2021.0,,,,,BBQ: A Hand-Built Bias Benchmark for Question Answering,BBQ: A hand-built bias benchmark for question answering,https://aclanthology.org/2022.findings-acl.165/,"by A Parrish · 2022 · Cited by 512 — We introduce the Bias Benchmark for QA (BBQ), a dataset of question-sets constructed by the authors that highlight attested social biases." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,webster-etal-2018-mind,\cite{webster-etal-2018-mind},Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns,http://arxiv.org/abs/1810.05201v1,"Coreference resolution is an important task for natural language understanding, and the resolution of ambiguous pronouns a longstanding challenge. 
Nonetheless, existing corpora do not capture ambiguous pronouns in sufficient volume or diversity to accurately indicate the practical utility of models. Furthermore, we find gender bias in existing corpora and systems favoring masculine entities. To address this, we present and release GAP, a gender-balanced labeled corpus of 8,908 ambiguous pronoun-name pairs sampled to provide diverse coverage of challenges posed by real-world text. We explore a range of baselines which demonstrate the complexity of the challenge, the best achieving just 66.9% F1. We show that syntactic structure and continuous neural models provide promising, complementary cues for approaching the challenge.",True,True,"Webster, Kellie and Recasens, Marta and Axelrod, Vera and Baldridge, Jason",2018.0,,,,Transactions of the Association for Computational Linguistics,Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns,Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns,http://arxiv.org/pdf/1810.05201v1,"Coreference resolution is an important task for natural language understanding, and the resolution of ambiguous pronouns a longstanding challenge. Nonetheless, existing corpora do not capture ambiguous pronouns in sufficient volume or diversity to accurately indicate the practical utility of models. Furthermore, we find gender bias in existing corpora and systems favoring masculine entities. To address this, we present and release GAP, a gender-balanced labeled corpus of 8,908 ambiguous pronoun-name pairs sampled to provide diverse coverage of challenges posed by real-world text. We explore a range of baselines which demonstrate the complexity of the challenge, the best achieving just 66.9% F1. We show that syntactic structure and continuous neural models provide promising, complementary cues for approaching the challenge." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,pant-dadu-2022-incorporating,\cite{pant-dadu-2022-incorporating},Incorporating Subjectivity into Gendered Ambiguous Pronoun ({GAP}) Resolution using Style Transfer,,,True,False,"Pant, Kartikey and Dadu, Tanvi",2022.0,,,,,Incorporating Subjectivity into Gendered Ambiguous Pronoun ({GAP}) Resolution using Style Transfer,Incorporating Subjectivity into Gendered Ambiguous Pronoun (GAP ...,https://www.researchgate.net/publication/362266417_Incorporating_Subjectivity_into_Gendered_Ambiguous_Pronoun_GAP_Resolution_using_Style_Transfer,"Incorporating Subjectivity into Gendered Ambiguous Pronoun (GAP) Resolution using Style Transfer ... GAP-Subjective is the same size as GAP, with 8,908 instances." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,levy-etal-2021-collecting-large,\cite{levy-etal-2021-collecting-large},"Collecting a Large-Scale Gender Bias Dataset for Coreference Resolution and Machine Translation",http://arxiv.org/abs/2109.03858v2,"Recent works have found evidence of gender bias in models of machine translation and coreference resolution using mostly synthetic diagnostic datasets. While these quantify bias in a controlled experiment, they often do so on a small scale and consist mostly of artificial, out-of-distribution sentences. In this work, we find grammatical patterns indicating stereotypical and non-stereotypical gender-role assignments (e.g., female nurses versus male dancers) in corpora from three domains, resulting in a first large-scale gender bias dataset of 108K diverse real-world English sentences. 
We manually verify the quality of our corpus and use it to evaluate gender bias in various coreference resolution and machine translation models. We find that all tested models tend to over-rely on gender stereotypes when presented with natural inputs, which may be especially harmful when deployed in commercial systems. Finally, we show that our dataset lends itself to finetuning a coreference resolution model, finding it mitigates bias on a held out set. Our dataset and models are publicly available at www.github.com/SLAB-NLP/BUG. We hope they will spur future research into gender bias evaluation mitigation techniques in realistic settings.",True,True,"Levy, Shahar and Lazar, Koren and Stanovsky, Gabriel",2021.0,,,,,"Collecting a Large-Scale Gender Bias Dataset for Coreference Resolution and Machine Translation",[PDF] Collecting a Large-Scale Gender Bias Dataset for Coreference ...,https://aclanthology.org/2021.findings-emnlp.211.pdf,"We use BUG to evaluate gender bias in various coreference resolution and machine translation models, finding that models tend to make" "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,gawlikowski2023survey,\cite{gawlikowski2023survey},A Survey of Uncertainty in Deep Neural Networks,http://arxiv.org/abs/2107.03342v3,"Due to their increasing spread, confidence in neural network predictions became more and more important. However, basic neural networks do not deliver certainty estimates or suffer from over or under confidence. Many researchers have been working on understanding and quantifying uncertainty in a neural network's prediction. As a result, different types and sources of uncertainty have been identified and a variety of approaches to measure and quantify uncertainty in neural networks have been proposed. This work gives a comprehensive overview of uncertainty estimation in neural networks, reviews recent advances in the field, highlights current challenges, and identifies potential research opportunities. It is intended to give anyone interested in uncertainty estimation in neural networks a broad overview and introduction, without presupposing prior knowledge in this field. A comprehensive introduction to the most crucial sources of uncertainty is given and their separation into reducible model uncertainty and not reducible data uncertainty is presented. The modeling of these uncertainties based on deterministic neural networks, Bayesian neural networks, ensemble of neural networks, and test-time data augmentation approaches is introduced and different branches of these fields as well as the latest developments are discussed. For a practical application, we discuss different measures of uncertainty, approaches for the calibration of neural networks and give an overview of existing baselines and implementations. Different examples from the wide spectrum of challenges in different fields give an idea of the needs and challenges regarding uncertainties in practical applications. 
Additionally, the practical limitations of current methods for mission- and safety-critical real world applications are discussed and an outlook on the next steps towards a broader usage of such methods is given.",True,True,"Gawlikowski, Jakob and Tassi, Cedrique Rovile Njieutcheu and Ali, Mohsin and Lee, Jongseok and Humt, Matthias and Feng, Jianxiang and Kruspe, Anna and Triebel, Rudolph and Jung, Peter and Roscher, Ribana and others",2023.0,,,,Artificial Intelligence Review,A Survey of Uncertainty in Deep Neural Networks,A Survey of Uncertainty in Deep Neural Networks,http://arxiv.org/pdf/2107.03342v3,"Due to their increasing spread, confidence in neural network predictions became more and more important. However, basic neural networks do not deliver certainty estimates or suffer from over or under confidence. Many researchers have been working on understanding and quantifying uncertainty in a neural network's prediction. As a result, different types and sources of uncertainty have been identified and a variety of approaches to measure and quantify uncertainty in neural networks have been proposed. This work gives a comprehensive overview of uncertainty estimation in neural networks, reviews recent advances in the field, highlights current challenges, and identifies potential research opportunities. It is intended to give anyone interested in uncertainty estimation in neural networks a broad overview and introduction, without presupposing prior knowledge in this field. A comprehensive introduction to the most crucial sources of uncertainty is given and their separation into reducible model uncertainty and not reducible data uncertainty is presented. The modeling of these uncertainties based on deterministic neural networks, Bayesian neural networks, ensemble of neural networks, and test-time data augmentation approaches is introduced and different branches of these fields as well as the latest developments are discussed. For a practical application, we discuss different measures of uncertainty, approaches for the calibration of neural networks and give an overview of existing baselines and implementations. Different examples from the wide spectrum of challenges in different fields give an idea of the needs and challenges regarding uncertainties in practical applications. Additionally, the practical limitations of current methods for mission- and safety-critical real world applications are discussed and an outlook on the next steps towards a broader usage of such methods is given." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,hu2023uncertainty,\cite{hu2023uncertainty},"Uncertainty in Natural Language Processing: Sources, Quantification, and Applications",http://arxiv.org/abs/2306.04459v1,"As a main field of artificial intelligence, natural language processing (NLP) has achieved remarkable success via deep neural networks. Plenty of NLP tasks have been addressed in a unified manner, with various tasks being associated with each other through sharing the same paradigm. However, neural networks are black boxes and rely on probability computation. Making mistakes is inevitable. Therefore, estimating the reliability and trustworthiness (in other words, uncertainty) of neural networks becomes a key research direction, which plays a crucial role in reducing models' risks and making better decisions. Therefore, in this survey, we provide a comprehensive review of uncertainty-relevant works in the NLP field. 
Considering the data and paradigms characteristics, we first categorize the sources of uncertainty in natural language into three types, including input, system, and output. Then, we systemically review uncertainty quantification approaches and the main applications. Finally, we discuss the challenges of uncertainty estimation in NLP and discuss potential future directions, taking into account recent trends in the field. Though there have been a few surveys about uncertainty estimation, our work is the first to review uncertainty from the NLP perspective.",True,True,"Hu, Mengting and Zhang, Zhen and Zhao, Shiwan and Huang, Minlie and Wu, Bingzhe",2023.0,,,,arXiv preprint arXiv:2306.04459,"Uncertainty in Natural Language Processing: Sources, Quantification, and Applications",[PDF] Uncertainty in Natural Language Processing: Sources ... - arXiv,https://arxiv.org/pdf/2306.04459,"Then, we systemically review uncertainty quantification approaches and the main applications. Finally, we discuss the challenges of uncertainty." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,huang2023look,\cite{huang2023look},"Look Before You Leap: An Exploratory Study of Uncertainty Measurement for Large Language Models",http://arxiv.org/abs/2307.10236v4,"The recent performance leap of Large Language Models (LLMs) opens up new opportunities across numerous industrial applications and domains. However, erroneous generations, such as false predictions, misinformation, and hallucination made by LLMs, have also raised severe concerns for the trustworthiness of LLMs', especially in safety-, security- and reliability-sensitive scenarios, potentially hindering real-world adoptions. While uncertainty estimation has shown its potential for interpreting the prediction risks made by general machine learning (ML) models, little is known about whether and to what extent it can help explore an LLM's capabilities and counteract its undesired behavior. To bridge the gap, in this paper, we initiate an exploratory study on the risk assessment of LLMs from the lens of uncertainty. In particular, we experiment with twelve uncertainty estimation methods and four LLMs on four prominent natural language processing (NLP) tasks to investigate to what extent uncertainty estimation techniques could help characterize the prediction risks of LLMs. Our findings validate the effectiveness of uncertainty estimation for revealing LLMs' uncertain/non-factual predictions. In addition to general NLP tasks, we extensively conduct experiments with four LLMs for code generation on two datasets. We find that uncertainty estimation can potentially uncover buggy programs generated by LLMs. Insights from our study shed light on future design and development for reliable LLMs, facilitating further research toward enhancing the trustworthiness of LLMs.",True,True,"Huang, Yuheng and Song, Jiayang and Wang, Zhijie and Zhao, Shengming and Chen, Huaming and Juefei-Xu, Felix and Ma, Lei",2023.0,,,,arXiv preprint arXiv:2307.10236,"Look Before You Leap: An Exploratory Study of Uncertainty Measurement for Large Language Models",Look Before You Leap: An Exploratory Study of Uncertainty ... - arXiv,https://arxiv.org/abs/2307.10236,The recent performance leap of Large Language Models (LLMs) opens up new opportunities across numerous industrial applications and domains. "Is Your Model Fairly Certain? 
Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,fadeeva2023lm,\cite{fadeeva2023lm},LM-polygraph: Uncertainty estimation for language models,,,True,False,"Fadeeva, Ekaterina and Vashurin, Roman and Tsvigun, Akim and Vazhentsev, Artem and Petrakov, Sergey and Fedyanin, Kirill and Vasilev, Daniil and Goncharova, Elizaveta and Panchenko, Alexander and Panov, Maxim and others",2023.0,,,,,LM-polygraph: Uncertainty estimation for language models,LM-Polygraph: Uncertainty Estimation for Language Models,http://arxiv.org/pdf/2311.07383v1,"Recent advancements in the capabilities of large language models (LLMs) have paved the way for a myriad of groundbreaking applications in various fields. However, a significant challenge arises as these models often ""hallucinate"", i.e., fabricate facts without providing users an apparent means to discern the veracity of their statements. Uncertainty estimation (UE) methods are one path to safer, more responsible, and more effective use of LLMs. However, to date, research on UE methods for LLMs has been focused primarily on theoretical rather than engineering contributions. In this work, we tackle this issue by introducing LM-Polygraph, a framework with implementations of a battery of state-of-the-art UE methods for LLMs in text generation tasks, with unified program interfaces in Python. Additionally, it introduces an extendable benchmark for consistent evaluation of UE techniques by researchers, and a demo web application that enriches the standard chat dialog with confidence scores, empowering end-users to discern unreliable responses. LM-Polygraph is compatible with the most recent LLMs, including BLOOMz, LLaMA-2, ChatGPT, and GPT-4, and is designed to support future releases of similarly-styled LMs." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,kendall2017uncertainties,\cite{kendall2017uncertainties},"What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?",http://arxiv.org/abs/1703.04977v2,"There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model -- uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic uncertainty in computer vision, but with new Bayesian deep learning tools this is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.",True,True,"Kendall, Alex and Gal, Yarin",2017.0,,,,NeurIPS,"What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?",[PDF] What Uncertainties Do We Need in Bayesian Deep Learning ... 
- NIPS,http://papers.neurips.cc/paper/7141-what-uncertainties-do-we-need-in-bayesian-deep-learning-for-computer-vision.pdf,"Quantifying uncertainty in computer vision applications can be largely divided into regression settings such as depth regression, and classification settings" "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,bridle1990probabilistic,\cite{bridle1990probabilistic},"Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition",,,True,False,"Bridle, John S",1990.0,,,,,"Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition",PROBABILISTIC INTERPRETATION OF FEEDFORWARD ...,https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=818b3279ba393e0c0aeea200652199e8f4c59942,"by M COSTA · Cited by 37 — J. S. Bridle 1989, ""Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition,"" in Neu-." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,hendrycks2017a,\cite{hendrycks2017a},"A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks",http://arxiv.org/abs/1610.02136v3,"We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.",True,True,Dan Hendrycks and Kevin Gimpel,2017.0,,,,,"A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks",A Baseline for Detecting Misclassified and Out-of- ...,https://arxiv.org/abs/1610.02136,by D Hendrycks · 2016 · Cited by 4553 — We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,jurafsky2000speech,\cite{jurafsky2000speech},"Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition",,,True,False,"Jurafsky, Daniel and Martin, James H",2000.0,,,,,"Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition",Speech and Language Processing: An Introduction to Natural ...,https://www.amazon.com/Speech-Language-Processing-Introduction-Computational/dp/0130950696,"An introduction to natural language processing, computational linguistics and speech recognition. ISBN-13: 978-0130950697, ISBN-10: 0130950696." "Is Your Model Fairly Certain? 
Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,fomicheva2020unsupervised,\cite{fomicheva2020unsupervised},Unsupervised Quality Estimation for Neural Machine Translation,http://arxiv.org/abs/2005.10608v2,"Quality Estimation (QE) is an important component in making Machine Translation (MT) useful in real-world applications, as it is aimed to inform the user on the quality of the MT output at test time. Existing approaches require large amounts of expert annotated data, computation and time for training. As an alternative, we devise an unsupervised approach to QE where no training or access to additional resources besides the MT system itself is required. Different from most of the current work that treats the MT system as a black box, we explore useful information that can be extracted from the MT system as a by-product of translation. By employing methods for uncertainty quantification, we achieve very good correlation with human judgments of quality, rivalling state-of-the-art supervised QE models. To evaluate our approach we collect the first dataset that enables work on both black-box and glass-box approaches to QE.",True,True,"Fomicheva, Marina and Sun, Shuo and Yankovskaya, Lisa and Blain, Fr{\'e}d{\'e}ric and Guzm{\'a}n, Francisco and Fishel, Mark and Aletras, Nikolaos and Chaudhary, Vishrav and Specia, Lucia",2020.0,,,,,Unsupervised Quality Estimation for Neural Machine Translation,Unsupervised Quality Estimation for Neural Machine Translation,http://arxiv.org/pdf/2005.10608v2,"Quality Estimation (QE) is an important component in making Machine Translation (MT) useful in real-world applications, as it is aimed to inform the user on the quality of the MT output at test time. Existing approaches require large amounts of expert annotated data, computation and time for training. As an alternative, we devise an unsupervised approach to QE where no training or access to additional resources besides the MT system itself is required. Different from most of the current work that treats the MT system as a black box, we explore useful information that can be extracted from the MT system as a by-product of translation. By employing methods for uncertainty quantification, we achieve very good correlation with human judgments of quality, rivalling state-of-the-art supervised QE models. To evaluate our approach we collect the first dataset that enables work on both black-box and glass-box approaches to QE." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,malinin2021uncertainty,\cite{malinin2021uncertainty},Uncertainty Estimation in Autoregressive Structured Prediction,http://arxiv.org/abs/2002.07650v5,"Uncertainty estimation is important for ensuring safety and robustness of AI systems. While most research in the area has focused on un-structured prediction tasks, limited work has investigated general uncertainty estimation approaches for structured prediction. Thus, this work aims to investigate uncertainty estimation for autoregressive structured prediction tasks within a single unified and interpretable probabilistic ensemble-based framework. We consider: uncertainty estimation for sequence data at the token-level and complete sequence-level; interpretations for, and applications of, various measures of uncertainty; and discuss both the theoretical and practical challenges associated with obtaining them. 
This work also provides baselines for token-level and sequence-level error detection, and sequence-level out-of-domain input detection on the WMT'14 English-French and WMT'17 English-German translation and LibriSpeech speech recognition datasets.",True,True,"Malinin, Andrey and Gales, Mark",2021.0,,,,,Uncertainty Estimation in Autoregressive Structured Prediction,Uncertainty Estimation in Autoregressive Structured Prediction,http://arxiv.org/pdf/2002.07650v5,"Uncertainty estimation is important for ensuring safety and robustness of AI systems. While most research in the area has focused on un-structured prediction tasks, limited work has investigated general uncertainty estimation approaches for structured prediction. Thus, this work aims to investigate uncertainty estimation for autoregressive structured prediction tasks within a single unified and interpretable probabilistic ensemble-based framework. We consider: uncertainty estimation for sequence data at the token-level and complete sequence-level; interpretations for, and applications of, various measures of uncertainty; and discuss both the theoretical and practical challenges associated with obtaining them. This work also provides baselines for token-level and sequence-level error detection, and sequence-level out-of-domain input detection on the WMT'14 English-French and WMT'17 English-German translation and LibriSpeech speech recognition datasets." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,vovk2005algorithmic,\cite{vovk2005algorithmic},Algorithmic learning in a random world,,,True,False,"Vovk, Vladimir and Gammerman, Alexander and Shafer, Glenn",2005.0,,,,,Algorithmic learning in a random world,Algorithmic Learning in a Random World,https://www.amazon.ca/Algorithmic-Learning-Random-World-Vladimir/dp/0387001522,Algorithmic Learning in a Random World describes recent theoretical and experimental developments in building computable approximations to Kolmogorov's "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,gal2016dropout,\cite{gal2016dropout},"Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning",http://arxiv.org/abs/1506.02142v6,"Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs -- extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and non-linearities are assessed on tasks of regression and classification, using MNIST as an example. 
We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning.",True,True,"Gal, Yarin and Ghahramani, Zoubin",2016.0,,,,,"Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning",Representing Model Uncertainty in Deep Learning - arXiv,https://arxiv.org/abs/1506.02142,In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,yu2022learning,\cite{yu2022learning},Learning Uncertainty for Unknown Domains with Zero-Target-Assumption,,,True,False,"Yu, Yu and Sajjad, Hassan and Xu, Jia",2022.0,,,,,Learning Uncertainty for Unknown Domains with Zero-Target-Assumption,Learning Uncertainty for Unknown Domains with Zero-Target ...,https://openreview.net/forum?id=pWVASryOyFw,"In this paper, the authors propose to use a Maximum-Entropy Rewarded Reinforcement Learning framework to select training data for NLP tasks, the goal of which is to maximize generalization. Weaknesses: The authors only proved the role of entropy in selecting data, but this paper does not elaborate on the motivation and advantages of introducing complex reinforcement learning to train a policy network. 1. “The authors only proved the role of entropy in selecting data, but this paper does not elaborate on the motivation and advantages of introducing complex reinforcement learning to train a policy network.” This paper proposes a method for optimal training set selection with the goal of maximizing generalization to multiple unknown target domains for NLP tasks." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,kuhn2023semantic,\cite{kuhn2023semantic},"Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation",http://arxiv.org/abs/2302.09664v3,"We introduce a method to measure uncertainty in large language models. For tasks like question answering, it is essential to know when we can trust the natural language outputs of foundation models. We show that measuring uncertainty in natural language is challenging because of ""semantic equivalence"" -- different sentences can mean the same thing. To overcome these challenges we introduce semantic entropy -- an entropy which incorporates linguistic invariances created by shared meanings. Our method is unsupervised, uses only a single model, and requires no modifications to off-the-shelf language models. In comprehensive ablation studies we show that the semantic entropy is more predictive of model accuracy on question answering data sets than comparable baselines.",True,True,"Kuhn, Lorenz and Gal, Yarin and Farquhar, Sebastian",2023.0,,,,,"Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation",Semantic Uncertainty: Linguistic Invariances for ... - OpenReview,https://openreview.net/forum?id=VD-AYtP0dve,"Summary: The paper proposes an approach called semantic entropy, which incorporates linguistic invariances for uncertainty estimation in NLG." "Is Your Model Fairly Certain? 
Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,duan2023shifting,\cite{duan2023shifting},"Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models",http://arxiv.org/abs/2307.01379v3,"Large Language Models (LLMs) show promising results in language generation and instruction following but frequently ""hallucinate"", making their outputs less reliable. Despite Uncertainty Quantification's (UQ) potential solutions, implementing it accurately within LLMs is challenging. Our research introduces a simple heuristic: not all tokens in auto-regressive LLM text equally represent the underlying meaning, as ""linguistic redundancy"" often allows a few keywords to convey the essence of long sentences. However, current methods underestimate this inequality when assessing uncertainty, causing tokens with limited semantics to be equally or excessively weighted in UQ. To correct this, we propose Shifting Attention to more Relevant (SAR) components at both token- and sentence-levels for better UQ. We conduct extensive experiments involving a range of popular ""off-the-shelf"" LLMs, such as Vicuna, WizardLM, and LLaMA-2-chat, with model sizes extending up to 33B parameters. We evaluate various free-form question-answering tasks, encompassing domains such as reading comprehension, science Q&A, and medical Q&A. Our experimental results, coupled with a comprehensive demographic analysis, demonstrate the superior performance of SAR. The code is available at https://github.com/jinhaoduan/SAR.",True,True,"Duan, Jinhao and Cheng, Hao and Wang, Shiqi and Wang, Chenan and Zavalny, Alex and Xu, Renjing and Kailkhura, Bhavya and Xu, Kaidi",2024.0,,,,,"Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models",Shifting Attention to Relevance: Towards the Predictive ...,https://arxiv.org/abs/2307.01379,"by J Duan · 2023 · Cited by 172 — Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models. Authors:Jinhao Duan, Hao" "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,kadavath2022language,\cite{kadavath2022language},Language Models (Mostly) Know What They Know,http://arxiv.org/abs/2207.05221v4,"We study whether language models can evaluate the validity of their own claims and predict which questions they will be able to answer correctly. We first show that larger models are well-calibrated on diverse multiple choice and true/false questions when they are provided in the right format. Thus we can approach self-evaluation on open-ended sampling tasks by asking models to first propose answers, and then to evaluate the probability ""P(True)"" that their answers are correct. We find encouraging performance, calibration, and scaling for P(True) on a diverse array of tasks. Performance at self-evaluation further improves when we allow models to consider many of their own samples before predicting the validity of one specific possibility. Next, we investigate whether models can be trained to predict ""P(IK)"", the probability that ""I know"" the answer to a question, without reference to any particular proposed answer. Models perform well at predicting P(IK) and partially generalize across tasks, though they struggle with calibration of P(IK) on new tasks. 
The predicted P(IK) probabilities also increase appropriately in the presence of relevant source materials in the context, and in the presence of hints towards the solution of mathematical word problems. We hope these observations lay the groundwork for training more honest models, and for investigating how honesty generalizes to cases where models are trained on objectives other than the imitation of human writing.",True,True,"Kadavath, Saurav and Conerly, Tom and Askell, Amanda and Henighan, Tom and Drain, Dawn and Perez, Ethan and Schiefer, Nicholas and Hatfield-Dodds, Zac and DasSarma, Nova and Tran-Johnson, Eli and others",2022.0,,,,arXiv preprint arXiv:2207.05221,Language Models (Mostly) Know What They Know,Language Models (Mostly) Know What They Know,http://arxiv.org/pdf/2207.05221v4,"We study whether language models can evaluate the validity of their own claims and predict which questions they will be able to answer correctly. We first show that larger models are well-calibrated on diverse multiple choice and true/false questions when they are provided in the right format. Thus we can approach self-evaluation on open-ended sampling tasks by asking models to first propose answers, and then to evaluate the probability ""P(True)"" that their answers are correct. We find encouraging performance, calibration, and scaling for P(True) on a diverse array of tasks. Performance at self-evaluation further improves when we allow models to consider many of their own samples before predicting the validity of one specific possibility. Next, we investigate whether models can be trained to predict ""P(IK)"", the probability that ""I know"" the answer to a question, without reference to any particular proposed answer. Models perform well at predicting P(IK) and partially generalize across tasks, though they struggle with calibration of P(IK) on new tasks. The predicted P(IK) probabilities also increase appropriately in the presence of relevant source materials in the context, and in the presence of hints towards the solution of mathematical word problems. We hope these observations lay the groundwork for training more honest models, and for investigating how honesty generalizes to cases where models are trained on objectives other than the imitation of human writing." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,malinin2018predictive,\cite{malinin2018predictive},Predictive Uncertainty Estimation via Prior Networks,http://arxiv.org/abs/1802.10501v4,"Estimating how uncertain an AI system is in its predictions is important to improve the safety of such systems. Uncertainty in predictive can result from uncertainty in model parameters, irreducible data uncertainty and uncertainty due to distributional mismatch between the test and training data distributions. Different actions might be taken depending on the source of the uncertainty so it is important to be able to distinguish between them. Recently, baseline tasks and metrics have been defined and several practical methods to estimate uncertainty developed. These methods, however, attempt to model uncertainty due to distributional mismatch either implicitly through model uncertainty or as data uncertainty. This work proposes a new framework for modeling predictive uncertainty called Prior Networks (PNs) which explicitly models distributional uncertainty. PNs do this by parameterizing a prior distribution over predictive distributions. 
This work focuses on uncertainty for classification and evaluates PNs on the tasks of identifying out-of-distribution (OOD) samples and detecting misclassification on the MNIST dataset, where they are found to outperform previous methods. Experiments on synthetic and MNIST and CIFAR-10 data show that unlike previous non-Bayesian methods PNs are able to distinguish between data and distributional uncertainty.",True,True,"Malinin, Andrey and Gales, Mark",2018.0,,,,,Predictive Uncertainty Estimation via Prior Networks,Predictive Uncertainty Estimation via Prior Networks,http://arxiv.org/pdf/1802.10501v4,"Estimating how uncertain an AI system is in its predictions is important to improve the safety of such systems. Uncertainty in predictive can result from uncertainty in model parameters, irreducible data uncertainty and uncertainty due to distributional mismatch between the test and training data distributions. Different actions might be taken depending on the source of the uncertainty so it is important to be able to distinguish between them. Recently, baseline tasks and metrics have been defined and several practical methods to estimate uncertainty developed. These methods, however, attempt to model uncertainty due to distributional mismatch either implicitly through model uncertainty or as data uncertainty. This work proposes a new framework for modeling predictive uncertainty called Prior Networks (PNs) which explicitly models distributional uncertainty. PNs do this by parameterizing a prior distribution over predictive distributions. This work focuses on uncertainty for classification and evaluates PNs on the tasks of identifying out-of-distribution (OOD) samples and detecting misclassification on the MNIST dataset, where they are found to outperform previous methods. Experiments on synthetic and MNIST and CIFAR-10 data show that unlike previous non-Bayesian methods PNs are able to distinguish between data and distributional uncertainty." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,darrin2022rainproof,\cite{darrin2022rainproof},"Rainproof: An Umbrella To Shield Text Generators From Out-Of-Distribution Data",http://arxiv.org/abs/2212.09171v2,"Implementing effective control mechanisms to ensure the proper functioning and security of deployed NLP models, from translation to chatbots, is essential. A key ingredient to ensure safe system behaviour is Out-Of-Distribution (OOD) detection, which aims to detect whether an input sample is statistically far from the training distribution. Although OOD detection is a widely covered topic in classification tasks, most methods rely on hidden features output by the encoder. In this work, we focus on leveraging soft-probabilities in a black-box framework, i.e. we can access the soft-predictions but not the internal states of the model. Our contributions include: (i) RAINPROOF a Relative informAItioN Projection OOD detection framework; and (ii) a more operational evaluation setting for OOD detection. Surprisingly, we find that OOD detection is not necessarily aligned with task-specific measures. The OOD detector may filter out samples well processed by the model and keep samples that are not, leading to weaker performance. 
Our results show that RAINPROOF provides OOD detection methods more aligned with task-specific performance metrics than traditional OOD detectors.",True,True,"Darrin, Maxime and Piantanida, Pablo and Colombo, Pierre",2023.0,,,,,"Rainproof: An Umbrella To Shield Text Generators From Out-Of-Distribution Data",RAINPROOF: An umbrella to shield text generators from ...,https://aclanthology.org/2023.emnlp-main.357.pdf,"by M Darrin · 2023 · Cited by 39 — RAINPROOF is a Relative informAItioN Projection OOD detection framework that shields text generators from out-of-distribution data, using soft-" "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,vashurin2025benchmarking,\cite{vashurin2025benchmarking},Benchmarking uncertainty quantification methods for large language models with lm-polygraph,,,True,False,"Vashurin, Roman and Fadeeva, Ekaterina and Vazhentsev, Artem and Rvanova, Lyudmila and Vasilev, Daniil and Tsvigun, Akim and Petrakov, Sergey and Xing, Rui and Sadallah, Abdelrahman and Grishchenkov, Kirill and others",2025.0,,,,Transactions of the Association for Computational Linguistics,Benchmarking uncertainty quantification methods for large language models with lm-polygraph,Benchmarking Uncertainty Quantification Methods for Large ...,https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00737/128713/Benchmarking-Uncertainty-Quantification-Methods,"We propose a new comprehensive benchmark for the evaluation of UQ and uncertainty normalization methods for LLMs. The benchmark can assess the calibration of uncertainty scores and their effectiveness in selective QA/generation and claim-level fact-checking (hallucination detection)." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,santilli2024spurious,\cite{santilli2024spurious},On a spurious interaction between uncertainty scores and answer evaluation metrics in generative qa tasks,,,True,False,"Santilli, Andrea and Xiong, Miao and Kirchhof, Michael and Rodriguez, Pau and Danieli, Federico and Suau, Xavier and Zappella, Luca and Williamson, Sinead and Golinski, Adam",2024.0,,,,,On a spurious interaction between uncertainty scores and answer evaluation metrics in generative qa tasks,On a Spurious Interaction between Uncertainty Scores & ...,https://openreview.net/pdf?id=jGtL0JFdeD,"by A Santilli · Cited by 3 — In this paper, we highlight that some UQ methods and answer evaluation metrics are spuriously correlated via the response length, which leads to falsely" "Is Your Model Fairly Certain? 
Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,santilli2025revisiting,\cite{santilli2025revisiting},"Revisiting Uncertainty Quantification Evaluation in Language Models: Spurious Interactions with Response Length Bias Results",http://arxiv.org/abs/2504.13677v2,"Uncertainty Quantification (UQ) in Language Models (LMs) is key to improving their safety and reliability. Evaluations often use metrics like AUROC to assess how well UQ methods (e.g., negative sequence probabilities) correlate with task correctness functions (e.g., ROUGE-L). We show that mutual biases--when both UQ methods and correctness functions are biased by the same factors--systematically distort evaluation. First, we formally prove that any mutual bias non-randomly skews AUROC rankings, compromising benchmark integrity. Second, we confirm this happens empirically by testing 7 widely used correctness functions, from lexical-based and embedding-based metrics to LM-as-a-judge approaches, across 4 datasets x 4 models x 8 UQ methods. Our analysis shows that length biases in correctness functions distort UQ assessments by interacting with length biases in UQ methods. We identify LM-as-a-judge methods as the least length-biased, offering a promising path for a fairer UQ evaluation.",True,True,"Santilli, Andrea and Golinski, Adam and Kirchhof, Michael and Danieli, Federico and Blaas, Arno and Xiong, Miao and Zappella, Luca and Williamson, Sinead",2025.0,,,,arXiv preprint arXiv:2504.13677,"Revisiting Uncertainty Quantification Evaluation in Language Models: Spurious Interactions with Response Length Bias Results",Spurious Interactions with Response Length Bias Results,https://arxiv.org/pdf/2504.13677?,by A Santilli · 2025 · Cited by 3 — Uncertainty Quantification (UQ) in Language. Models (LMs) is key to improving their safety and reliability. Evaluations often use metrics. "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,mehta2024evaluating,\cite{mehta2024evaluating},"Evaluating the Fairness of Deep Learning Uncertainty Estimates in Medical Image Analysis",http://arxiv.org/abs/2303.03242v1,"Although deep learning (DL) models have shown great success in many medical image analysis tasks, deployment of the resulting models into real clinical contexts requires: (1) that they exhibit robustness and fairness across different sub-populations, and (2) that the confidence in DL model predictions be accurately expressed in the form of uncertainties. Unfortunately, recent studies have indeed shown significant biases in DL models across demographic subgroups (e.g., race, sex, age) in the context of medical image analysis, indicating a lack of fairness in the models. Although several methods have been proposed in the ML literature to mitigate a lack of fairness in DL models, they focus entirely on the absolute performance between groups without considering their effect on uncertainty estimation. In this work, we present the first exploration of the effect of popular fairness models on overcoming biases across subgroups in medical image analysis in terms of bottom-line performance, and their effects on uncertainty quantification. We perform extensive experiments on three different clinically relevant tasks: (i) skin lesion classification, (ii) brain tumour segmentation, and (iii) Alzheimer's disease clinical score regression. 
Our results indicate that popular ML methods, such as data-balancing and distributionally robust optimization, succeed in mitigating fairness issues in terms of the model performances for some of the tasks. However, this can come at the cost of poor uncertainty estimates associated with the model predictions. This tradeoff must be mitigated if fairness models are to be adopted in medical image analysis.",True,True,"Mehta, Raghav and Shui, Changjian and Arbel, Tal",2024.0,,,,,"Evaluating the Fairness of Deep Learning Uncertainty Estimates in Medical Image Analysis",Evaluating the Fairness of Deep Learning Uncertainty Estimates in ...,https://arxiv.org/abs/2303.03242,"In this work, we present the first exploration of the effect of popular fairness models on overcoming biases across subgroups in medical image analysis." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,kuzmin-etal-2023-uncertainty,\cite{kuzmin-etal-2023-uncertainty},Uncertainty Estimation for Debiased Models: Does Fairness Hurt Reliability?,,,True,False,"Kuzmin, Gleb and Vazhentsev, Artem and Shelmanov, Artem and Han, Xudong and Suster, Simon and Panov, Maxim and Panchenko, Alexander and Baldwin, Timothy",2023.0,,https://aclanthology.org/2023.ijcnlp-main.48/,10.18653/v1/2023.ijcnlp-main.48,,Uncertainty Estimation for Debiased Models: Does Fairness Hurt Reliability?,Uncertainty Estimation for Debiased Models: Does Fairness Hurt ...,https://aclanthology.org/2023.ijcnlp-main.48/,Uncertainty Estimation for Debiased Models: Does Fairness Hurt Reliability?. In Proceedings of the 13th International Joint Conference on Natural Language "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,kuzucu2023uncertainty,\cite{kuzucu2023uncertainty},Uncertainty as a Fairness Measure,,,True,False,"Kuzucu, Selim and Cheong, Jiaee and Gunes, Hatice and Kalkan, Sinan",2023.0,,,,arXiv preprint arXiv:2312.11299,Uncertainty as a Fairness Measure,[2312.11299] Uncertainty-based Fairness Measures - arXiv,https://arxiv.org/abs/2312.11299,"We introduce new fairness measures based on different types of uncertainties, namely, aleatoric uncertainty and epistemic uncertainty." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,kaiser2022uncertainty,\cite{kaiser2022uncertainty},Uncertainty-aware predictive modeling for fair data-driven decisions,,,True,False,"Kaiser, Patrick and Kern, Christoph and R{\""u}gamer, David",2022.0,,,,arXiv preprint arXiv:2211.02730,Uncertainty-aware predictive modeling for fair data-driven decisions,Uncertainty-aware predictive modeling for fair data-driven ...,https://openreview.net/forum?id=8DXj-ze0x_s,"The authors highlight the importance of accounting uncertainty in automated decision-making (ADM) systems in order to further promote fairness and propose the use of the reject option in ADM, which is triggered when the level of uncertainty is above a certain threshold. This paper intends to develop a fair decision-making system leveraging a distributional prediction model and a distribution-aware decision-making module. This paper connects uncertainty with fairness in automated decision-making systems. 2. 
This paper indeed failed to propose any new uncertainty quantification method that is designed for the decision task." "Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs",2505.23996v1,tahir2023fairness,\cite{tahir2023fairness},Fairness through Aleatoric Uncertainty,http://arxiv.org/abs/2304.03646v2,"We propose a simple yet effective solution to tackle the often-competing goals of fairness and utility in classification tasks. While fairness ensures that the model's predictions are unbiased and do not discriminate against any particular group or individual, utility focuses on maximizing the model's predictive performance. This work introduces the idea of leveraging aleatoric uncertainty (e.g., data ambiguity) to improve the fairness-utility trade-off. Our central hypothesis is that aleatoric uncertainty is a key factor for algorithmic fairness and samples with low aleatoric uncertainty are modeled more accurately and fairly than those with high aleatoric uncertainty. We then propose a principled model to improve fairness when aleatoric uncertainty is high and improve utility elsewhere. Our approach first intervenes in the data distribution to better decouple aleatoric uncertainty and epistemic uncertainty. It then introduces a fairness-utility bi-objective loss defined based on the estimated aleatoric uncertainty. Our approach is theoretically guaranteed to improve the fairness-utility trade-off. Experimental results on both tabular and image datasets show that the proposed approach outperforms state-of-the-art methods w.r.t. the fairness-utility trade-off and w.r.t. both group and individual fairness metrics. This work presents a fresh perspective on the trade-off between utility and algorithmic fairness and opens a key avenue for the potential of using prediction uncertainty in fair machine learning.",True,True,"Tahir, Anique and Cheng, Lu and Liu, Huan",2023.0,,,,,Fairness through Aleatoric Uncertainty,Fairness through Aleatoric Uncertainty,http://arxiv.org/pdf/2304.03646v2,"We propose a simple yet effective solution to tackle the often-competing goals of fairness and utility in classification tasks. While fairness ensures that the model's predictions are unbiased and do not discriminate against any particular group or individual, utility focuses on maximizing the model's predictive performance. This work introduces the idea of leveraging aleatoric uncertainty (e.g., data ambiguity) to improve the fairness-utility trade-off. Our central hypothesis is that aleatoric uncertainty is a key factor for algorithmic fairness and samples with low aleatoric uncertainty are modeled more accurately and fairly than those with high aleatoric uncertainty. We then propose a principled model to improve fairness when aleatoric uncertainty is high and improve utility elsewhere. Our approach first intervenes in the data distribution to better decouple aleatoric uncertainty and epistemic uncertainty. It then introduces a fairness-utility bi-objective loss defined based on the estimated aleatoric uncertainty. Our approach is theoretically guaranteed to improve the fairness-utility trade-off. Experimental results on both tabular and image datasets show that the proposed approach outperforms state-of-the-art methods w.r.t. the fairness-utility trade-off and w.r.t. both group and individual fairness metrics. 
This work presents a fresh perspective on the trade-off between utility and algorithmic fairness and opens a key avenue for the potential of using prediction uncertainty in fair machine learning." "Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis",2505.23353v1,Mcal,\cite{Mcal},Synthetic quantitative MRI through relaxometry modelling,,,True,False,"Callaghan, Martina F. and Mohammadi, Siawoosh and Weiskopf, Nikolaus",2016.0,,https://dx.doi.org/10.1002/nbm.3658,10.1002/nbm.3658,NMR in Biomedicine,Synthetic quantitative MRI through relaxometry modelling,Synthetic quantitative MRI through relaxometry modelling - PMC,https://pmc.ncbi.nlm.nih.gov/articles/PMC5132086/,The proposed synthetic qMRI approach shows promise for furthering our understanding of the inter‐relation of MRI parameters and for maximizing "Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis",2505.23353v1,Jand,\cite{Jand},Synthetic MRI for stroke: a qualitative and quantitative pilot study,,,True,False,"André, Joachim and Barrit, Sami and Jissendi, Patrice",2022.0,,,10.1038/s41598-022-15204-8,Scientific Reports,Synthetic MRI for stroke: a qualitative and quantitative pilot study,(PDF) Synthetic MRI for stroke: a qualitative and quantitative pilot study,https://www.researchgate.net/publication/361826097_Synthetic_MRI_for_stroke_a_qualitative_and_quantitative_pilot_study,Synthetic MR provides qualitative and quantitative multi-parametric data about tissue properties in a single acquisition. Its use in stroke imaging is not "Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis",2505.23353v1,Emoy,\cite{Emoy},A deep learning approach for synthetic MRI based on two routine sequences and training with synthetic data,,,True,False,"Moya-Sáez, Elisa and Peña-Nogales, Óscar and Luis-García, Rodrigo de and Alberola-López, Carlos",2021.0,,https://www.sciencedirect.com/science/article/pii/S0169260721004454,https://doi.org/10.1016/j.cmpb.2021.106371,Computer Methods and Programs in Biomedicine,A deep learning approach for synthetic MRI based on two routine sequences and training with synthetic data,A deep learning approach for synthetic MRI based on two routine ...,https://pubmed.ncbi.nlm.nih.gov/34525411/,"**Conclusions:** These results show that our approach is able to provide realistic parametric maps and weighted images out of a CNN that (a) is trained with a synthetic dataset and (b) needs only two inputs, which are in turn obtained from a common full-brain acquisition that takes less than 8 min of scan time." "Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis",2505.23353v1,Kgop,\cite{Kgop},"Synthetic data in generalizable, learning-based neuroimaging",,,True,False,"Gopinath, Karthik and Hoopes, Andrew and Alexander, Daniel C. and Arnold, Steven E. and Balbastre, Yael and Billot, Benjamin and Casamitjana, Adrià and Cheng, You and Chua, Russ Yue Zhi and Edlow, Brian L. 
and Fischl, Bruce and Gazula, Harshvardhan and Hoffmann, Malte and Keene, C. Dirk and Kim, Seunghoi and Kimberly, W. Taylor and Laguna, Sonia and Larson, Kathleen E. and Van Leemput, Koen and Puonti, Oula and Rodrigues, Livia M. and Rosen, Matthew S. and Tregidgo, Henry F. J. and Varadarajan, Divya and Young, Sean I. and Dalca, Adrian V. and Iglesias, Juan Eugenio",2024.0,11,https://doi.org/10.1162/imag\_a\_00337,10.1162/imag_a_00337,Imaging Neuroscience,"Synthetic data in generalizable, learning-based neuroimaging","Synthetic data in generalizable, learning-based ...",https://direct.mit.edu/imag/article/doi/10.1162/imag_a_00337/124867/Synthetic-data-in-generalizable-learning-based,"by K Gopinath · 2024 · Cited by 17 — Synthetic data have emerged as an attractive option for developing machine-learning methods in human neuroimaging, particularly in magnetic resonance imaging (" "Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis",2505.23353v1,Jigl,\cite{Jigl},SynthSR: A public AI tool to turn heterogeneous clinical brain scans into high-resolution T1-weighted images for 3D morphometry,,,True,False,Juan E. Iglesias and Benjamin Billot and Yaël Balbastre and Colin Magdamo and Steven E. Arnold and Sudeshna Das and Brian L. Edlow and Daniel C. Alexander and Polina Golland and Bruce Fischl,2023.0,,https://www.science.org/doi/abs/10.1126/sciadv.add3607,10.1126/sciadv.add3607,Science Advances,SynthSR: A public AI tool to turn heterogeneous clinical brain scans into high-resolution T1-weighted images for 3D morphometry,SynthSR: A public AI tool to turn heterogeneous clinical brain scans ...,https://pubmed.ncbi.nlm.nih.gov/36724222/,Missing: 04/08/2025 "Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis",2505.23353v1,jwil,\cite{jwil},Limits of Transfer Learning,http://arxiv.org/abs/2006.12694v1,"Transfer learning involves taking information and insight from one problem domain and applying it to a new problem domain. Although widely used in practice, theory for transfer learning remains less well-developed. To address this, we prove several novel results related to transfer learning, showing the need to carefully select which sets of information to transfer and the need for dependence between transferred information and target problems. Furthermore, we prove how the degree of probabilistic change in an algorithm using transfer learning places an upper bound on the amount of improvement possible. These results build on the algorithmic search framework for machine learning, allowing the results to apply to a wide range of learning problems using transfer.",True,True,Jake Williams and Abel Tadesse and Tyler Sam and Huey Sun and George D. Montanez,2020.0,,https://arxiv.org/abs/2006.12694,,,Limits of Transfer Learning,Limits of Transfer Learning,http://arxiv.org/pdf/2006.12694v1,"Transfer learning involves taking information and insight from one problem domain and applying it to a new problem domain. Although widely used in practice, theory for transfer learning remains less well-developed. To address this, we prove several novel results related to transfer learning, showing the need to carefully select which sets of information to transfer and the need for dependence between transferred information and target problems. Furthermore, we prove how the degree of probabilistic change in an algorithm using transfer learning places an upper bound on the amount of improvement possible. 
These results build on the algorithmic search framework for machine learning, allowing the results to apply to a wide range of learning problems using transfer." "Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis",2505.23353v1,weli,\cite{weli},Detecting Alzheimer's Disease on Small Dataset: A Knowledge Transfer Perspective,,,True,False,"Li, Wei and Zhao, Yifei and Chen, Xi and Xiao, Yang and Qin, Yuanyuan",2019.0,,,10.1109/JBHI.2018.2839771,IEEE Journal of Biomedical and Health Informatics,Detecting Alzheimer's Disease on Small Dataset: A Knowledge Transfer Perspective,Detecting Alzheimer's Disease on Small Dataset,http://ieeexplore.ieee.org/document/8362917/,"In addition, we proposed an effective knowledge transfer method to diminish the disparity among different datasets and improve the" "Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis",2505.23353v1,jval,\cite{jval},"Transfer Learning in Magnetic Resonance Brain Imaging: a Systematic Review",http://arxiv.org/abs/2102.01530v2,"Transfer learning refers to machine learning techniques that focus on acquiring knowledge from related tasks to improve generalization in the tasks of interest. In MRI, transfer learning is important for developing strategies that address the variation in MR images. Additionally, transfer learning is beneficial to re-utilize machine learning models that were trained to solve related tasks to the task of interest. Our goal is to identify research directions, gaps of knowledge, applications, and widely used strategies among the transfer learning approaches applied in MR brain imaging. We performed a systematic literature search for articles that applied transfer learning to MR brain imaging. We screened 433 studies and we categorized and extracted relevant information, including task type, application, and machine learning methods. Furthermore, we closely examined brain MRI-specific transfer learning approaches and other methods that tackled privacy, unseen target domains, and unlabeled data. We found 129 articles that applied transfer learning to brain MRI tasks. The most frequent applications were dementia related classification tasks and brain tumor segmentation. A majority of articles utilized transfer learning on convolutional neural networks (CNNs). Only few approaches were clearly brain MRI specific, considered privacy issues, unseen target domains or unlabeled data. We proposed a new categorization to group specific, widely-used approaches. There is an increasing interest in transfer learning within brain MRI. Public datasets have contributed to the popularity of Alzheimer's diagnostics/prognostics and tumor segmentation. Likewise, the availability of pretrained CNNs has promoted their utilization. 
Finally, the majority of the surveyed studies did not examine in detail the interpretation of their strategies after applying transfer learning, and did not compare to other approaches.",True,True,"Valverde, Juan Miguel and Imani, Vandad and Abdollahzadeh, Ali and De Feo, Riccardo and Prakash, Mithilesh and Ciszek, Robert and Tohka, Jussi",2021.0,,http://dx.doi.org/10.3390/jimaging7040066,10.3390/jimaging7040066,Journal of Imaging,"Transfer Learning in Magnetic Resonance Brain Imaging: a Systematic Review",Transfer Learning in Magnetic Resonance Brain Imaging,https://www.researchgate.net/publication/350576269_Transfer_Learning_in_Magnetic_Resonance_Brain_Imaging_A_Systematic_Review,"The aim of this review is to identify research directions, gaps in knowledge, applications, and widely used strategies among the transfer learning approaches" "Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis",2505.23353v1,smat,\cite{smat},Employing deep learning and transfer learning for accurate brain tumor detection,,,True,False,"Mathivanan, Sandeep Kumar and Sonaimuthu, Sridevi and Murugesan, Sankar and Rajadurai, Hariharan and Shivahare, Basu Dev and Shah, Mohd Asif",2024.0,,,10.1038/s41598-024-57970-7,Scientific Reports,Employing deep learning and transfer learning for accurate brain tumor detection,(PDF) Employing deep learning and transfer learning for accurate ...,https://www.researchgate.net/publication/379337705_Employing_deep_learning_and_transfer_learning_for_accurate_brain_tumor_detection,This study delves into the potential of deep transfer learning architectures to elevate the accuracy of brain tumor diagnosis. Transfer learning "Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis",2505.23353v1,Vtha,\cite{Vtha},SinGAN-Seg: Synthetic training data generation for medical image segmentation,,,True,False,"Thambawita, Vajira AND Salehi, Pegah AND Sheshkal, Sajad Amouei AND Hicks, Steven A. AND Hammer, Hugo L. AND Parasa, Sravanthi AND Lange, Thomas de AND Halvorsen, Pål AND Riegler, Michael A.",2022.0,05,https://doi.org/10.1371/journal.pone.0267976,10.1371/journal.pone.0267976,PLOS ONE,SinGAN-Seg: Synthetic training data generation for medical image segmentation,SinGAN-Seg: Synthetic training data generation for medical image segmentation,http://arxiv.org/pdf/2107.00471v2,"Analyzing medical data to find abnormalities is a time-consuming and costly task, particularly for rare abnormalities, requiring tremendous efforts from medical experts. Artificial intelligence has become a popular tool for the automatic processing of medical data, acting as a supportive tool for doctors. However, the machine learning models used to build these tools are highly dependent on the data used to train them. Large amounts of data can be difficult to obtain in medicine due to privacy, expensive and time-consuming annotations, and a general lack of data samples for infrequent lesions. Here, we present a novel synthetic data generation pipeline, called SinGAN-Seg, to produce synthetic medical images with corresponding masks using a single training image. Our method is different from the traditional GANs because our model needs only a single image and the corresponding ground truth to train. Our method produces alternative artificial segmentation datasets with ground truth masks when real datasets are not allowed to share. 
The pipeline is evaluated using qualitative and quantitative comparisons between real and synthetic data to show that the style transfer technique used in our pipeline significantly improves the quality of the generated data and our method is better than other state-of-the-art GANs to prepare synthetic images when the size of training datasets are limited. By training UNet++ using both real and the synthetic data generated from the SinGAN-Seg pipeline, we show that models trained with synthetic data have very close performances to those trained on real data when the datasets have a considerable amount of data. In contrast, Synthetic data generated from the SinGAN-Seg pipeline can improve the performance of segmentation models when training datasets do not have a considerable amount of data. The code is available on GitHub." "Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis",2505.23353v1,Awah,\cite{Awah},"CovidGAN: Data Augmentation Using Auxiliary Classifier GAN for Improved Covid-19 Detection",http://arxiv.org/abs/2103.05094v1,"Coronavirus (COVID-19) is a viral disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The spread of COVID-19 seems to have a detrimental effect on the global economy and health. A positive chest X-ray of infected patients is a crucial step in the battle against COVID-19. Early results suggest that abnormalities exist in chest X-rays of patients suggestive of COVID-19. This has led to the introduction of a variety of deep learning systems and studies have shown that the accuracy of COVID-19 patient detection through the use of chest X-rays is strongly optimistic. Deep learning networks like convolutional neural networks (CNNs) need a substantial amount of training data. Because the outbreak is recent, it is difficult to gather a significant number of radiographic images in such a short time. Therefore, in this research, we present a method to generate synthetic chest X-ray (CXR) images by developing an Auxiliary Classifier Generative Adversarial Network (ACGAN) based model called CovidGAN. In addition, we demonstrate that the synthetic images produced from CovidGAN can be utilized to enhance the performance of CNN for COVID-19 detection. Classification using CNN alone yielded 85% accuracy. By adding synthetic images produced by CovidGAN, the accuracy increased to 95%. We hope this method will speed up COVID-19 detection and lead to more robust systems of radiology.",True,True,"Waheed, Abdul and Goyal, Muskan and Gupta, Deepak and Khanna, Ashish and Al-Turjman, Fadi and Pinheiro, Plácido Rogerio",2020.0,,,10.1109/ACCESS.2020.2994762,IEEE Access,"CovidGAN: Data Augmentation Using Auxiliary Classifier GAN for Improved Covid-19 Detection",(PDF) CovidGAN: Data Augmentation using Auxiliary Classifier GAN ...,https://www.researchgate.net/publication/341401062_CovidGAN_Data_Augmentation_using_Auxiliary_Classifier_GAN_for_Improved_Covid-19_Detection,"By adding synthetic images produced by CovidGAN, the accuracy increased to 95%. 
We hope this method will speed up COVID-19 detection and lead to" "Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis",2505.23353v1,Bahm,\cite{Bahm},Brain Tumor Classification Using a Combination of Variational Autoencoders and Generative Adversarial Networks,,,True,False,"Ahmad, Bilal and Sun, Jun and You, Qi and Palade, Vasile and Mao, Zhongjie",2022.0,,https://www.mdpi.com/2227-9059/10/2/223,,Biomedicines,Brain Tumor Classification Using a Combination of Variational Autoencoders and Generative Adversarial Networks,(PDF) Brain Tumor Classification Using a Combination of Variational ...,https://www.researchgate.net/publication/358017457_Brain_Tumor_Classification_Using_a_Combination_of_Variational_Autoencoders_and_Generative_Adversarial_Networks,This paper proposes a framework based on unsupervised deep generative neural networks to solve this limitation. We combine two generative models in the proposed "Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis",2505.23353v1,Hzha,\cite{Hzha},QSMRim-Net: Imbalance-aware learning for identification of chronic active multiple sclerosis lesions on quantitative susceptibility maps,,,True,False,"Zhang, Hang and Nguyen, Thanh D. and Zhang, Jinwei and Marcille, Melanie and Spincemaille, Pascal and Wang, Yi and Gauthier, Susan A. and Sweeney, Elizabeth M.",2022.0,,https://www.sciencedirect.com/science/article/pii/S2213158222000444,https://doi.org/10.1016/j.nicl.2022.102979,NeuroImage: Clinical,QSMRim-Net: Imbalance-aware learning for identification of chronic active multiple sclerosis lesions on quantitative susceptibility maps,QSMRim-Net: Imbalance-aware learning for identification of chronic ...,https://pubmed.ncbi.nlm.nih.gov/35247730/,"We present QSMRim-Net, a data imbalance-aware deep neural network that fuses lesion-level radiomic and convolutional image features for automated identification of rim + lesions on QSM." "Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis",2505.23353v1,Ddab,\cite{Ddab},DeepSMOTE: Fusing Deep Learning and SMOTE for Imbalanced Data,http://arxiv.org/abs/2105.02340v1,"Despite over two decades of progress, imbalanced data is still considered a significant challenge for contemporary machine learning models. Modern advances in deep learning have magnified the importance of the imbalanced data problem. The two main approaches to address this issue are based on loss function modifications and instance resampling. Instance sampling is typically based on Generative Adversarial Networks (GANs), which may suffer from mode collapse. 
Therefore, there is a need for an oversampling method that is specifically tailored to deep learning models, can work on raw images while preserving their properties, and is capable of generating high quality, artificial images that can enhance minority classes and balance the training set. We propose DeepSMOTE - a novel oversampling algorithm for deep learning models. It is simple, yet effective in its design. It consists of three major components: (i) an encoder/decoder framework; (ii) SMOTE-based oversampling; and (iii) a dedicated loss function that is enhanced with a penalty term. An important advantage of DeepSMOTE over GAN-based oversampling is that DeepSMOTE does not require a discriminator, and it generates high-quality artificial images that are both information-rich and suitable for visual inspection. DeepSMOTE code is publicly available at: https://github.com/dd1github/DeepSMOTE",True,True,Damien Dablain and Bartosz Krawczyk and Nitesh V. Chawla,2021.0,,https://arxiv.org/abs/2105.02340,,,DeepSMOTE: Fusing Deep Learning and SMOTE for Imbalanced Data,DeepSMOTE: Fusing Deep Learning and SMOTE for Imbalanced Data,http://arxiv.org/pdf/2105.02340v1,"Despite over two decades of progress, imbalanced data is still considered a significant challenge for contemporary machine learning models. Modern advances in deep learning have magnified the importance of the imbalanced data problem. The two main approaches to address this issue are based on loss function modifications and instance resampling. Instance sampling is typically based on Generative Adversarial Networks (GANs), which may suffer from mode collapse. Therefore, there is a need for an oversampling method that is specifically tailored to deep learning models, can work on raw images while preserving their properties, and is capable of generating high quality, artificial images that can enhance minority classes and balance the training set. We propose DeepSMOTE - a novel oversampling algorithm for deep learning models. It is simple, yet effective in its design. It consists of three major components: (i) an encoder/decoder framework; (ii) SMOTE-based oversampling; and (iii) a dedicated loss function that is enhanced with a penalty term. An important advantage of DeepSMOTE over GAN-based oversampling is that DeepSMOTE does not require a discriminator, and it generates high-quality artificial images that are both information-rich and suitable for visual inspection. DeepSMOTE code is publicly available at: https://github.com/dd1github/DeepSMOTE" "Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis",2505.23353v1,Msal,\cite{Msal},"Multiple Sclerosis Lesion Synthesis in MRI using an encoder-decoder U-NET",http://arxiv.org/abs/1901.05733v1,"In this paper, we propose generating synthetic multiple sclerosis (MS) lesions on MRI images with the final aim to improve the performance of supervised machine learning algorithms, therefore avoiding the problem of the lack of available ground truth. We propose a two-input two-output fully convolutional neural network model for MS lesion synthesis in MRI images. The lesion information is encoded as discrete binary intensity level masks passed to the model and stacked with the input images. The model is trained end-to-end without the need for manually annotating the lesions in the training set. 
We then perform the generation of synthetic lesions on healthy images via registration of patient images, which are subsequently used for data augmentation to increase the performance for supervised MS lesion detection algorithms. Our pipeline is evaluated on MS patient data from an in-house clinical dataset and the public ISBI2015 challenge dataset. The evaluation is based on measuring the similarities between the real and the synthetic images as well as in terms of lesion detection performance by segmenting both the original and synthetic images individually using a state-of-the-art segmentation framework. We also demonstrate the usage of synthetic MS lesions generated on healthy images as data augmentation. We analyze a scenario of limited training data (one-image training) to demonstrate the effect of the data augmentation on both datasets. Our results significantly show the effectiveness of the usage of synthetic MS lesion images. For the ISBI2015 challenge, our one-image model trained using only a single image plus the synthetic data augmentation strategy showed a performance similar to that of other CNN methods that were fully trained using the entire training set, yielding a comparable human expert rater performance",True,True,"Salem, Mostafa and Valverde, Sergi and Cabezas, Mariano and Pareto, Deborah and Oliver, Arnau and Salvi, Joaquim and Rovira, Àlex and Lladó, Xavier",2019.0,,,10.1109/ACCESS.2019.2900198,IEEE Access,"Multiple Sclerosis Lesion Synthesis in MRI using an encoder-decoder U-NET",(PDF) Multiple Sclerosis Lesion Synthesis in MRI using an encoder ...,https://www.researchgate.net/publication/331238531_Multiple_Sclerosis_Lesion_Synthesis_in_MRI_using_an_encoder-decoder_U-NET,"In this paper, we propose generating synthetic multiple sclerosis (MS) lesions on MRI images with the final aim to improve the performance of supervised machine" "Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis",2505.23353v1,Igoo,\cite{Igoo},Generative Adversarial Networks,http://arxiv.org/abs/1406.2661v1,"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",True,True,Ian J. 
Goodfellow and Jean Pouget-Abadie and Mehdi Mirza and Bing Xu and David Warde-Farley and Sherjil Ozair and Aaron Courville and Yoshua Bengio,2014.0,,https://arxiv.org/abs/1406.2661,,,Generative Adversarial Networks,Generative Adversarial Networks,http://arxiv.org/pdf/1406.2661v1,"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." "Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis",2505.23353v1,Wxia,\cite{Wxia},GAN Inversion: A Survey,http://arxiv.org/abs/2101.05278v5,"GAN inversion aims to invert a given image back into the latent space of a pretrained GAN model, for the image to be faithfully reconstructed from the inverted code by the generator. As an emerging technique to bridge the real and fake image domains, GAN inversion plays an essential role in enabling the pretrained GAN models such as StyleGAN and BigGAN to be used for real image editing applications. Meanwhile, GAN inversion also provides insights on the interpretation of GAN's latent space and how the realistic images can be generated. In this paper, we provide an overview of GAN inversion with a focus on its recent algorithms and applications. We cover important techniques of GAN inversion and their applications to image restoration and image manipulation. We further elaborate on some trends and challenges for future directions.",True,True,Weihao Xia and Yulun Zhang and Yujiu Yang and Jing-Hao Xue and Bolei Zhou and Ming-Hsuan Yang,2022.0,,https://arxiv.org/abs/2101.05278,,,GAN Inversion: A Survey,GAN Inversion: A Survey,http://arxiv.org/pdf/2101.05278v5,"GAN inversion aims to invert a given image back into the latent space of a pretrained GAN model, for the image to be faithfully reconstructed from the inverted code by the generator. As an emerging technique to bridge the real and fake image domains, GAN inversion plays an essential role in enabling the pretrained GAN models such as StyleGAN and BigGAN to be used for real image editing applications. Meanwhile, GAN inversion also provides insights on the interpretation of GAN's latent space and how the realistic images can be generated. In this paper, we provide an overview of GAN inversion with a focus on its recent algorithms and applications. We cover important techniques of GAN inversion and their applications to image restoration and image manipulation. We further elaborate on some trends and challenges for future directions." 
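The GAN and GAN-inversion records above both revolve around a pretrained generator's latent space; as the survey describes it, inversion can be done by directly optimising a latent code to reconstruct a target image. A minimal sketch under that reading, using pixel-space MSE only (real systems typically add perceptual and regularisation terms; `generator` is an assumed pretrained module):

```python
import torch

def invert(generator, target, z_dim=128, steps=500, lr=0.05):
    """Project a target image into a pretrained generator's latent space by
    gradient descent on the latent code itself."""
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(generator(z), target)
        loss.backward()
        opt.step()
    return z.detach()  # edit z in latent space, then decode with generator(z)
```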
"Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis",2505.23353v1,Mmir,\cite{Mmir},Conditional Generative Adversarial Nets,http://arxiv.org/abs/1411.1784v1,"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",True,True,"Mehdi Mirza and Simon Osindero",2014.0,,http://arxiv.org/abs/1411.1784,,CoRR,Conditional Generative Adversarial Nets,Conditional Generative Adversarial Nets,http://arxiv.org/pdf/1411.1784v1,"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels." "Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis",2505.23353v1,Kthe,\cite{Kthe},Robustness of Conditional GANs to Noisy Labels,http://arxiv.org/abs/1811.03205v1,"We study the problem of learning conditional generators from noisy labeled samples, where the labels are corrupted by random noise. A standard training of conditional GANs will not only produce samples with wrong labels, but also generate poor quality samples. We consider two scenarios, depending on whether the noise model is known or not. When the distribution of the noise is known, we introduce a novel architecture which we call Robust Conditional GAN (RCGAN). The main idea is to corrupt the label of the generated sample before feeding to the adversarial discriminator, forcing the generator to produce samples with clean labels. This approach of passing through a matching noisy channel is justified by corresponding multiplicative approximation bounds between the loss of the RCGAN and the distance between the clean real distribution and the generator distribution. This shows that the proposed approach is robust, when used with a carefully chosen discriminator architecture, known as projection discriminator. When the distribution of the noise is not known, we provide an extension of our architecture, which we call RCGAN-U, that learns the noise model simultaneously while training the generator. 
We show experimentally on MNIST and CIFAR-10 datasets that both the approaches consistently improve upon baseline approaches, and RCGAN-U closely matches the performance of RCGAN.",True,True,Kiran Koshy Thekumparampil and Ashish Khetan and Zinan Lin and Sewoong Oh,2018.0,,https://arxiv.org/abs/1811.03205,,,Robustness of Conditional GANs to Noisy Labels,Robustness of Conditional GANs to Noisy Labels,http://arxiv.org/pdf/1811.03205v1,"We study the problem of learning conditional generators from noisy labeled samples, where the labels are corrupted by random noise. A standard training of conditional GANs will not only produce samples with wrong labels, but also generate poor quality samples. We consider two scenarios, depending on whether the noise model is known or not. When the distribution of the noise is known, we introduce a novel architecture which we call Robust Conditional GAN (RCGAN). The main idea is to corrupt the label of the generated sample before feeding to the adversarial discriminator, forcing the generator to produce samples with clean labels. This approach of passing through a matching noisy channel is justified by corresponding multiplicative approximation bounds between the loss of the RCGAN and the distance between the clean real distribution and the generator distribution. This shows that the proposed approach is robust, when used with a carefully chosen discriminator architecture, known as projection discriminator. When the distribution of the noise is not known, we provide an extension of our architecture, which we call RCGAN-U, that learns the noise model simultaneously while training the generator. We show experimentally on MNIST and CIFAR-10 datasets that both the approaches consistently improve upon baseline approaches, and RCGAN-U closely matches the performance of RCGAN." "Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis",2505.23353v1,Wehua,\cite{Wehua},"Correcting Noisy Multilabel Predictions: Modeling Label Noise through Latent Space Shifts",http://arxiv.org/abs/2502.14281v3,"Noise in data appears to be inevitable in most real-world machine learning applications and would cause severe overfitting problems. Not only can data features contain noise, but labels are also prone to be noisy due to human input. In this paper, rather than noisy label learning in multiclass classifications, we instead focus on the less explored area of noisy label learning for multilabel classifications. Specifically, we investigate the post-correction of predictions generated from classifiers learned with noisy labels. The reasons are two-fold. Firstly, this approach can directly work with the trained models to save computational resources. Secondly, it could be applied on top of other noisy label correction techniques to achieve further improvements. To handle this problem, we appeal to deep generative approaches that are possible for uncertainty estimation. Our model posits that label noise arises from a stochastic shift in the latent variable, providing a more robust and beneficial means for noisy learning. We develop both unsupervised and semi-supervised learning methods for our model. The extensive empirical study presents solid evidence to that our approach is able to consistently improve the independent models and performs better than a number of existing methods across various noisy label settings. 
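The RCGAN record completed above passes each generated sample's label through the known noise channel before the discriminator sees it. A small sketch of that corruption step alone, assuming `T` is the known row-stochastic transition matrix:

```python
import torch

def corrupt_labels(y: torch.Tensor, T: torch.Tensor) -> torch.Tensor:
    """RCGAN-style matching noisy channel: resample each generated sample's
    label from T[y], the noise distribution conditioned on its clean label.
    y: (batch,) integer labels; T: (n_classes, n_classes) row-stochastic."""
    probs = T[y]                         # (batch, n_classes) channel rows
    return torch.multinomial(probs, 1).squeeze(1)
```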
Moreover, a comprehensive empirical analysis of the proposed method is carried out to validate its robustness, including sensitivity analysis and an ablation study, among other elements.",True,True,Weipeng Huang and Qin Li and Yang Xiao and Cheng Qiao and Tie Cai and Junwei Liao and Neil J. Hurley and Guangyuan Piao,2025.0,,https://arxiv.org/abs/2502.14281,,,"Correcting Noisy Multilabel Predictions: Modeling Label Noise through Latent Space Shifts",[PDF] Correcting Noisy Multilabel Predictions: Modeling Label Noise ...,http://arxiv.org/pdf/2502.14281,"Once the shifted latent variable still locates in the right latent space, the generated label noise will also follow the pattern. (in particular" "Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis",2505.23353v1,Hbae,\cite{Hbae},"From Noisy Prediction to True Label: Noisy Prediction Calibration via Generative Model",http://arxiv.org/abs/2205.00690v3,"Noisy labels are inevitable yet problematic in machine learning society. It ruins the generalization of a classifier by making the classifier over-fitted to noisy labels. Existing methods on noisy label have focused on modifying the classifier during the training procedure. It has two potential problems. First, these methods are not applicable to a pre-trained classifier without further access to training. Second, it is not easy to train a classifier and regularize all negative effects from noisy labels, simultaneously. We suggest a new branch of method, Noisy Prediction Calibration (NPC) in learning with noisy labels. Through the introduction and estimation of a new type of transition matrix via generative model, NPC corrects the noisy prediction from the pre-trained classifier to the true label as a post-processing scheme. We prove that NPC theoretically aligns with the transition matrix based methods. Yet, NPC empirically provides more accurate pathway to estimate true label, even without involvement in classifier learning. Also, NPC is applicable to any classifier trained with noisy label methods, if training instances and its predictions are available. Our method, NPC, boosts the classification performances of all baseline models on both synthetic and real-world datasets. The implemented code is available at https://github.com/BaeHeeSun/NPC.",True,True,HeeSun Bae and Seungjae Shin and Byeonghu Na and JoonHo Jang and Kyungwoo Song and Il-Chul Moon,2022.0,,https://arxiv.org/abs/2205.00690,,,"From Noisy Prediction to True Label: Noisy Prediction Calibration via Generative Model",[PDF] Noisy Prediction Calibration via Generative Model,https://icml.cc/media/icml-2022/Slides/18350_oZIPQgX.pdf,NPC models the relation between output of a classifier and the true label via generative model. NPC consistently boosts the classification performances of pre- "Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis",2505.23353v1,Vkel,\cite{Vkel},"Prior Image-Constrained Reconstruction using Style-Based Generative Models",http://arxiv.org/abs/2102.12525v2,"Obtaining a useful estimate of an object from highly incomplete imaging measurements remains a holy grail of imaging science. Deep learning methods have shown promise in learning object priors or constraints to improve the conditioning of an ill-posed imaging inverse problem. In this study, a framework for estimating an object of interest that is semantically related to a known prior image, is proposed. 
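The NPC record above post-corrects a noisy classifier via a transition matrix estimated with a generative model. The classical transition-matrix correction that the paper says it theoretically aligns with can be sketched as a linear inversion of the noise channel (NPC itself is more involved; this is only the baseline idea):

```python
import numpy as np

def calibrate(pred_probs: np.ndarray, T: np.ndarray) -> np.ndarray:
    """pred_probs: (n, c) classifier outputs over *noisy* labels;
    T: (c, c) with T[i, j] = p(noisy label j | true label i).
    Since p_noisy = p_true @ T, recover p_true by solving the linear system."""
    p = np.linalg.solve(T.T, pred_probs.T).T  # invert the noise channel
    p = np.clip(p, 0, None)                   # numerical guard against tiny negatives
    return p / p.sum(axis=1, keepdims=True)
```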
An optimization problem is formulated in the disentangled latent space of a style-based generative model, and semantically meaningful constraints are imposed using the disentangled latent representation of the prior image. Stable recovery from incomplete measurements with the help of a prior image is theoretically analyzed. Numerical experiments demonstrating the superior performance of our approach as compared to related methods are presented.",True,True,"Kelkar, Varun A and Anastasio, Mark",2021.0,18--24 Jul,https://proceedings.mlr.press/v139/kelkar21a.html,,,"Prior Image-Constrained Reconstruction using Style-Based Generative Models",Prior Image-Constrained Reconstruction using Style-Based ...,http://proceedings.mlr.press/v139/kelkar21a/kelkar21a.pdf,"by VA Kelkar · 2021 · Cited by 33 — Style-based generative models have been known to be able to control individual semantic features, or styles, in an image by varying the disentangled. Page 2" Pre-Training Curriculum for Multi-Token Prediction in Language Models,2505.22757v1,bengio2009curriculum,\cite{bengio2009curriculum},Curriculum learning,,,True,False,"Bengio, Yoshua and Louradour, J\'{e}r\^{o}me and Collobert, Ronan and Weston, Jason",2009.0,,https://doi.org/10.1145/1553374.1553380,10.1145/1553374.1553380,,Curriculum learning,Curriculum learning,https://en.wikipedia.org/wiki/Curriculum_learning,Curriculum learning is a technique in machine learning in which a model is trained on examples of increasing difficulty. Pre-Training Curriculum for Multi-Token Prediction in Language Models,2505.22757v1,cl_survey,\cite{cl_survey},Curriculum Learning: A Survey,http://arxiv.org/abs/2101.10382v3,"Training machine learning models in a meaningful order, from the easy samples to the hard ones, using curriculum learning can provide performance improvements over the standard training approach based on random data shuffling, without any additional computational costs. Curriculum learning strategies have been successfully employed in all areas of machine learning, in a wide range of tasks. However, the necessity of finding a way to rank the samples from easy to hard, as well as the right pacing function for introducing more difficult data can limit the usage of the curriculum approaches. In this survey, we show how these limits have been tackled in the literature, and we present different curriculum learning instantiations for various tasks in machine learning. We construct a multi-perspective taxonomy of curriculum learning approaches by hand, considering various classification criteria. We further build a hierarchical tree of curriculum learning methods using an agglomerative clustering algorithm, linking the discovered clusters with our taxonomy. At the end, we provide some interesting directions for future work.",True,True,Petru Soviany and Radu Tudor Ionescu and Paolo Rota and Nicu Sebe,2022.0,,https://arxiv.org/abs/2101.10382,,,Curriculum Learning: A Survey,Curriculum Learning: A Survey,http://arxiv.org/pdf/2101.10382v3,"Training machine learning models in a meaningful order, from the easy samples to the hard ones, using curriculum learning can provide performance improvements over the standard training approach based on random data shuffling, without any additional computational costs. Curriculum learning strategies have been successfully employed in all areas of machine learning, in a wide range of tasks. 
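The curriculum-learning records above train on examples ordered from easy to hard under a pacing function. A minimal sketch of that loop, with a linear pacing function and a caller-supplied difficulty score (both illustrative choices, not a specific paper's recipe):

```python
import numpy as np

def curriculum_batches(scores, data, n_steps, batch_size, rng=None):
    """Easy-to-hard curriculum: rank examples by a difficulty score and let a
    linear pacing function grow the usable pool from 10% to 100% of the data."""
    rng = np.random.default_rng(rng)
    order = np.argsort(scores)  # ascending difficulty
    for step in range(n_steps):
        frac = min(1.0, 0.1 + 0.9 * step / max(1, n_steps - 1))  # pacing function
        pool = order[: max(batch_size, int(frac * len(order)))]
        yield [data[i] for i in rng.choice(pool, batch_size)]
```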
However, the necessity of finding a way to rank the samples from easy to hard, as well as the right pacing function for introducing more difficult data can limit the usage of the curriculum approaches. In this survey, we show how these limits have been tackled in the literature, and we present different curriculum learning instantiations for various tasks in machine learning. We construct a multi-perspective taxonomy of curriculum learning approaches by hand, considering various classification criteria. We further build a hierarchical tree of curriculum learning methods using an agglomerative clustering algorithm, linking the discovered clusters with our taxonomy. At the end, we provide some interesting directions for future work." Pre-Training Curriculum for Multi-Token Prediction in Language Models,2505.22757v1,cl_nlu,\cite{cl_nlu},Curriculum Learning for Natural Language Understanding,,,True,False,"Xu, Benfeng and Zhang, Licheng and Mao, Zhendong and Wang, Quan and Xie, Hongtao and Zhang, Yongdong",2020.0,,https://aclanthology.org/2020.acl-main.542,10.18653/v1/2020.acl-main.542,,Curriculum Learning for Natural Language Understanding,[PDF] Curriculum Learning for Natural Language Understanding - Digie,https://api.digie.ai/publications/Curriculum-Learning-for-NLU.pdf,"Natural Language Understanding (NLU), which re- quires machines to understand and reason with hu- man language, is a crucial yet challenging problem. Recently," Pre-Training Curriculum for Multi-Token Prediction in Language Models,2505.22757v1,cl_bert,\cite{cl_bert},Pre-training a {BERT} with Curriculum Learning by Increasing Block-Size of Input Text,,,True,False,"Nagatsuka, Koichi and Broni-Bediako, Clifford and Atsumi, Masayasu",2021.0,,https://aclanthology.org/2021.ranlp-1.112,,,Pre-training a {BERT} with Curriculum Learning by Increasing Block-Size of Input Text,Pre-training a BERT with Curriculum Learning by Increasing Block ...,https://aclanthology.org/2021.ranlp-1.112/,We propose a new CL method which gradually increases the block-size of input text for training the self-attention mechanism of BERT and its variants. Pre-Training Curriculum for Multi-Token Prediction in Language Models,2505.22757v1,bert_lrc,\cite{bert_lrc},Modeling Easiness for Training Transformers with Curriculum Learning,,,True,False,"Ranaldi, Leonardo and Pucci, Giulia and Zanzotto, Fabio Massimo",2023.0,,https://aclanthology.org/2023.ranlp-1.101,,,Modeling Easiness for Training Transformers with Curriculum Learning,Modeling Easiness for Training Transformers with Curriculum ...,https://aclanthology.org/2023.ranlp-1.101/,"In this paper, building on Curriculum Learning, we propose a novel, linguistically motivated measure to determine example complexity for organizing examples" Pre-Training Curriculum for Multi-Token Prediction in Language Models,2505.22757v1,orca,\cite{orca},Orca: Progressive Learning from Complex Explanation Traces of GPT-4,http://arxiv.org/abs/2306.02707v1,"Recent research has focused on enhancing the capability of smaller models through imitation learning, drawing on the outputs generated by large foundation models (LFMs). A number of issues impact the quality of these models, ranging from limited imitation signals from shallow LFM outputs; small scale homogeneous training data; and most notably a lack of rigorous evaluation resulting in overestimating the small model's capability as they tend to learn to imitate the style, but not the reasoning process of LFMs. 
To address these challenges, we develop Orca (We are working with our legal team to publicly release a diff of the model weights in accordance with LLaMA's release policy to be published at https://aka.ms/orca-lm), a 13-billion parameter model that learns to imitate the reasoning process of LFMs. Orca learns from rich signals from GPT-4 including explanation traces; step-by-step thought processes; and other complex instructions, guided by teacher assistance from ChatGPT. To promote this progressive learning, we tap into large-scale and diverse imitation data with judicious sampling and selection. Orca surpasses conventional state-of-the-art instruction-tuned models such as Vicuna-13B by more than 100% in complex zero-shot reasoning benchmarks like Big-Bench Hard (BBH) and 42% on AGIEval. Moreover, Orca reaches parity with ChatGPT on the BBH benchmark and shows competitive performance (4 pts gap with optimized system message) in professional and academic examinations like the SAT, LSAT, GRE, and GMAT, both in zero-shot settings without CoT; while trailing behind GPT-4. Our research indicates that learning from step-by-step explanations, whether these are generated by humans or more advanced AI models, is a promising direction to improve model capabilities and skills.",True,True,Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah,2023.0,,https://arxiv.org/abs/2306.02707,,,Orca: Progressive Learning from Complex Explanation Traces of GPT-4,Orca: Progressive Learning from Complex Explanation Traces of GPT-4,http://arxiv.org/pdf/2306.02707v1,"Recent research has focused on enhancing the capability of smaller models through imitation learning, drawing on the outputs generated by large foundation models (LFMs). A number of issues impact the quality of these models, ranging from limited imitation signals from shallow LFM outputs; small scale homogeneous training data; and most notably a lack of rigorous evaluation resulting in overestimating the small model's capability as they tend to learn to imitate the style, but not the reasoning process of LFMs. To address these challenges, we develop Orca (We are working with our legal team to publicly release a diff of the model weights in accordance with LLaMA's release policy to be published at https://aka.ms/orca-lm), a 13-billion parameter model that learns to imitate the reasoning process of LFMs. Orca learns from rich signals from GPT-4 including explanation traces; step-by-step thought processes; and other complex instructions, guided by teacher assistance from ChatGPT. To promote this progressive learning, we tap into large-scale and diverse imitation data with judicious sampling and selection. Orca surpasses conventional state-of-the-art instruction-tuned models such as Vicuna-13B by more than 100% in complex zero-shot reasoning benchmarks like Big-Bench Hard (BBH) and 42% on AGIEval. Moreover, Orca reaches parity with ChatGPT on the BBH benchmark and shows competitive performance (4 pts gap with optimized system message) in professional and academic examinations like the SAT, LSAT, GRE, and GMAT, both in zero-shot settings without CoT; while trailing behind GPT-4. Our research indicates that learning from step-by-step explanations, whether these are generated by humans or more advanced AI models, is a promising direction to improve model capabilities and skills." 
Pre-Training Curriculum for Multi-Token Prediction in Language Models,2505.22757v1,curr_instr,\cite{curr_instr},Instruction Tuning with Human Curriculum,http://arxiv.org/abs/2310.09518v4,"In this work, we (1) introduce Curriculum Instruction Tuning, (2) explore the potential advantages of employing diverse curriculum strategies, and (3) delineate a synthetic instruction-response generation framework that complements our theoretical approach. Distinct from the existing instruction tuning dataset, our generation pipeline is systematically structured to emulate the sequential and orderly characteristic of human learning. Additionally, we describe a methodology for generating instruction-response datasets that extensively span the various stages of human education, from middle school through the graduate level, utilizing educational subject catalogs. Before training, we meticulously organize the instruction data to ensure that questions escalate in difficulty regarding (A) the subject matter and (B) the intricacy of the instructions. The findings of our study reveal that substantial improvements in performance can be achieved through the mere application of curriculum ordering to instruction data (achieving gains of +4.76 on TruthfulQA, +2.98 on MMLU, +2.8 on OpenbookQA, and +1.28 on ARC-hard) compared to random shuffling. This enhancement is achieved without incurring additional computational expenses. Through comprehensive experimentation, we observe that the advantages of our proposed method are consistently evident across nine benchmarks.",True,True,"Lee, Bruce W and Cho, Hyunsoo and Yoo, Kang Min",2024.0,,https://aclanthology.org/2024.findings-naacl.82,10.18653/v1/2024.findings-naacl.82,,Instruction Tuning with Human Curriculum,Instruction Tuning with Human Curriculum,http://arxiv.org/pdf/2310.09518v4,"In this work, we (1) introduce Curriculum Instruction Tuning, (2) explore the potential advantages of employing diverse curriculum strategies, and (3) delineate a synthetic instruction-response generation framework that complements our theoretical approach. Distinct from the existing instruction tuning dataset, our generation pipeline is systematically structured to emulate the sequential and orderly characteristic of human learning. Additionally, we describe a methodology for generating instruction-response datasets that extensively span the various stages of human education, from middle school through the graduate level, utilizing educational subject catalogs. Before training, we meticulously organize the instruction data to ensure that questions escalate in difficulty regarding (A) the subject matter and (B) the intricacy of the instructions. The findings of our study reveal that substantial improvements in performance can be achieved through the mere application of curriculum ordering to instruction data (achieving gains of +4.76 on TruthfulQA, +2.98 on MMLU, +2.8 on OpenbookQA, and +1.28 on ARC-hard) compared to random shuffling. This enhancement is achieved without incurring additional computational expenses. Through comprehensive experimentation, we observe that the advantages of our proposed method are consistently evident across nine benchmarks." Pre-Training Curriculum for Multi-Token Prediction in Language Models,2505.22757v1,feng2024,\cite{feng2024},"Maximize Your Data's Potential: Enhancing LLM Accuracy with Two-Phase Pretraining",http://arxiv.org/abs/2412.15285v1,"Pretraining large language models effectively requires strategic data selection, blending and ordering. 
However, key details about data mixtures especially their scalability to longer token horizons and larger model sizes remain underexplored due to limited disclosure by model developers. To address this, we formalize the concept of two-phase pretraining and conduct an extensive systematic study on how to select and mix data to maximize model accuracies for the two phases. Our findings illustrate that a two-phase approach for pretraining outperforms random data ordering and natural distribution of tokens by 3.4% and 17% on average accuracies. We provide in-depth guidance on crafting optimal blends based on quality of the data source and the number of epochs to be seen. We propose to design blends using downsampled data at a smaller scale of 1T tokens and then demonstrate effective scaling of our approach to larger token horizon of 15T tokens and larger model size of 25B model size. These insights provide a series of steps practitioners can follow to design and scale their data blends.",True,True,Steven Feng and Shrimai Prabhumoye and Kezhi Kong and Dan Su and Mostofa Patwary and Mohammad Shoeybi and Bryan Catanzaro,2024.0,,https://arxiv.org/abs/2412.15285,,,"Maximize Your Data's Potential: Enhancing LLM Accuracy with Two-Phase Pretraining",Maximize Your Data's Potential: Enhancing LLM Accuracy with Two ...,https://arxiv.org/abs/2412.15285,A two-phase approach for pretraining outperforms random data ordering and natural distribution of tokens by 3.4% and 17% on average accuracies. Pre-Training Curriculum for Multi-Token Prediction in Language Models,2505.22757v1,babylm_2023,\cite{babylm_2023},Findings of the {B}aby{LM} Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora,,,True,False,"Warstadt, Alex and Mueller, Aaron and Choshen, Leshem and Wilcox, Ethan and Zhuang, Chengxu and Ciro, Juan and Mosquera, Rafael and Paranjabe, Bhargavi and Williams, Adina and Linzen, Tal and Cotterell, Ryan",2023.0,,https://aclanthology.org/2023.conll-babylm.1,10.18653/v1/2023.conll-babylm.1,,Findings of the {B}aby{LM} Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora,Findings of the BabyLM Challenge: Sample-Efficient Pretraining on ...,https://aclanthology.org/2023.conll-babylm.1/,"The BabyLM Challenge findings focus on sample-efficient pretraining on developmentally plausible corpora, presented at the 27th Conference on Computational" Pre-Training Curriculum for Multi-Token Prediction in Language Models,2505.22757v1,babylm_2024,\cite{babylm_2024},"Findings of the Second BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora",http://arxiv.org/abs/2412.05149v1,"The BabyLM Challenge is a community effort to close the data-efficiency gap between human and computational language learners. Participants compete to optimize language model training on a fixed language data budget of 100 million words or less. This year, we released improved text corpora, as well as a vision-and-language corpus to facilitate research into cognitively plausible vision language models. Submissions were compared on evaluation tasks targeting grammatical ability, (visual) question answering, pragmatic abilities, and grounding, among other abilities. Participants could submit to a 10M-word text-only track, a 100M-word text-only track, and/or a 100M-word and image multimodal track. From 31 submissions employing diverse methods, a hybrid causal-masked language model architecture outperformed other approaches. 
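The two-phase pretraining record completed above selects and mixes data sources differently in each phase. A toy sampler illustrating phase-specific mixture weights (source names and weights here are hypothetical, not the paper's actual blends):

```python
import numpy as np

# Hypothetical sources and weights, purely for illustration; the paper derives
# its blends from data-source quality and the epoch budget.
PHASE_WEIGHTS = {
    1: {"web_crawl": 0.7, "books": 0.2, "code": 0.1},  # diversity-heavy phase
    2: {"web_crawl": 0.3, "books": 0.3, "code": 0.4},  # quality-heavy phase
}

def sample_source(phase: int, rng=None) -> str:
    """Pick which corpus the next training document is drawn from,
    according to the current phase's mixture weights."""
    rng = np.random.default_rng(rng)
    names = list(PHASE_WEIGHTS[phase])
    probs = np.array([PHASE_WEIGHTS[phase][n] for n in names])
    return rng.choice(names, p=probs / probs.sum())
```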
No submissions outperformed the baselines in the multimodal track. In follow-up analyses, we found a strong relationship between training FLOPs and average performance across tasks, and that the best-performing submissions proposed changes to the training data, training objective, and model architecture. This year's BabyLM Challenge shows that there is still significant room for innovation in this setting, in particular for image-text modeling, but community-driven research can yield actionable insights about effective strategies for small-scale language modeling.",True,True,Michael Y. Hu and Aaron Mueller and Candace Ross and Adina Williams and Tal Linzen and Chengxu Zhuang and Ryan Cotterell and Leshem Choshen and Alex Warstadt and Ethan Gotlieb Wilcox,2024.0,,https://arxiv.org/abs/2412.05149,,,"Findings of the Second BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora",[2504.08165] Findings of the BabyLM Challenge,https://arxiv.org/abs/2504.08165,"From over 30 submissions, we extract concrete recommendations on how best to train data-efficient language models, and on where future efforts should (and perhaps should not) focus." Pre-Training Curriculum for Multi-Token Prediction in Language Models,2505.22757v1,less_is_more,\cite{less_is_more},"Less is More: Pre-Training Cross-Lingual Small-Scale Language Models with Cognitively-Plausible Curriculum Learning Strategies",http://arxiv.org/abs/2410.22886v2,"Curriculum Learning has been a popular strategy to improve the cognitive plausibility of Small-Scale Language Models (SSLMs) in the BabyLM Challenge. However, it has not led to considerable improvements over non-curriculum models. We assess whether theoretical linguistic acquisition theories can be used to specify more fine-grained curriculum learning strategies, creating age-ordered corpora of Child-Directed Speech for four typologically distant language families to implement SSLMs and acquisition-inspired curricula cross-lingually. Comparing the success of three objective curricula (Growing, Inwards and MMM) that precisely replicate the predictions of acquisition theories on a standard SSLM architecture, we find fine-grained acquisition-inspired curricula can outperform non-curriculum baselines and performance benefits of curricula strategies in SSLMs can be derived by specifying fine-grained language-specific curricula that precisely replicate language acquisition theories.",True,True,Suchir Salhan and Richard Diehl Martinez and Zébulon Goriely and Paula Buttery,2024.0,,https://arxiv.org/abs/2410.22886,,,"Less is More: Pre-Training Cross-Lingual Small-Scale Language Models with Cognitively-Plausible Curriculum Learning Strategies",Suchir Salhan - Google Scholar,https://scholar.google.com/citations?user=xOo9sisAAAAJ&hl=en,"Less is More: Pre-Training Cross-Lingual Small-Scale Language Models with Cognitively-Plausible Curriculum Learning Strategies.
S Salhan, RD Martinez, Z Goriely" Pre-Training Curriculum for Multi-Token Prediction in Language Models,2505.22757v1,prophetnet,\cite{prophetnet},{P}rophet{N}et: Predicting Future N-gram for Sequence-to-{S}equence {P}re-training,,,True,False,"Qi, Weizhen and Yan, Yu and Gong, Yeyun and Liu, Dayiheng and Duan, Nan and Chen, Jiusheng and Zhang, Ruofei and Zhou, Ming",2020.0,,https://aclanthology.org/2020.findings-emnlp.217,10.18653/v1/2020.findings-emnlp.217,,{P}rophet{N}et: Predicting Future N-gram for Sequence-to-{S}equence {P}re-training,ProphetNet: Predicting Future N-gram for Sequence-to- ...,https://arxiv.org/abs/2001.04063,"by W Qi · 2020 · Cited by 542 — This paper presents a new sequence-to-sequence pre-training model called ProphetNet, which introduces a novel self-supervised objective named future n-gram" Pre-Training Curriculum for Multi-Token Prediction in Language Models,2505.22757v1,future_lens,\cite{future_lens},Future Lens: Anticipating Subsequent Tokens from a Single Hidden State,http://arxiv.org/abs/2311.04897v1,"We conjecture that hidden state vectors corresponding to individual input tokens encode information sufficient to accurately predict several tokens ahead. More concretely, in this paper we ask: Given a hidden (internal) representation of a single token at position $t$ in an input, can we reliably anticipate the tokens that will appear at positions $\geq t + 2$? To test this, we measure linear approximation and causal intervention methods in GPT-J-6B to evaluate the degree to which individual hidden states in the network contain signal rich enough to predict future hidden states and, ultimately, token outputs. We find that, at some layers, we can approximate a model's output with more than 48% accuracy with respect to its prediction of subsequent tokens through a single hidden state. Finally we present a ""Future Lens"" visualization that uses these methods to create a new view of transformer states.",True,True,"Pal, Koyena and Sun, Jiuding and Yuan, Andrew and Wallace, Byron and Bau, David",2023.0,,https://aclanthology.org/2023.conll-1.37,10.18653/v1/2023.conll-1.37,,Future Lens: Anticipating Subsequent Tokens from a Single Hidden State,Future Lens: Anticipating Subsequent Tokens from a Single Hidden State,http://arxiv.org/pdf/2311.04897v1,"We conjecture that hidden state vectors corresponding to individual input tokens encode information sufficient to accurately predict several tokens ahead. More concretely, in this paper we ask: Given a hidden (internal) representation of a single token at position $t$ in an input, can we reliably anticipate the tokens that will appear at positions $\geq t + 2$? To test this, we measure linear approximation and causal intervention methods in GPT-J-6B to evaluate the degree to which individual hidden states in the network contain signal rich enough to predict future hidden states and, ultimately, token outputs. We find that, at some layers, we can approximate a model's output with more than 48% accuracy with respect to its prediction of subsequent tokens through a single hidden state. Finally we present a ""Future Lens"" visualization that uses these methods to create a new view of transformer states." Pre-Training Curriculum for Multi-Token Prediction in Language Models,2505.22757v1,gloeckle2024mtp,\cite{gloeckle2024mtp},Better & Faster Large Language Models via Multi-token Prediction,http://arxiv.org/abs/2404.19737v1,"Large language models such as GPT and Llama are trained with a next-token prediction loss.
In this work, we suggest that training language models to predict multiple future tokens at once results in higher sample efficiency. More specifically, at each position in the training corpus, we ask the model to predict the following n tokens using n independent output heads, operating on top of a shared model trunk. Considering multi-token prediction as an auxiliary training task, we measure improved downstream capabilities with no overhead in training time for both code and natural language models. The method is increasingly useful for larger model sizes, and keeps its appeal when training for multiple epochs. Gains are especially pronounced on generative benchmarks like coding, where our models consistently outperform strong baselines by several percentage points. Our 13B parameter models solves 12 % more problems on HumanEval and 17 % more on MBPP than comparable next-token models. Experiments on small algorithmic tasks demonstrate that multi-token prediction is favorable for the development of induction heads and algorithmic reasoning capabilities. As an additional benefit, models trained with 4-token prediction are up to 3 times faster at inference, even with large batch sizes.",True,True,Fabian Gloeckle and Badr Youbi Idrissi and Baptiste Rozière and David Lopez-Paz and Gabriel Synnaeve,2024.0,,https://arxiv.org/abs/2404.19737,,,Better & Faster Large Language Models via Multi-token Prediction,Better & Faster Large Language Models via Multi-token ...,https://www.reddit.com/r/LocalLLaMA/comments/1dj9xql/better_faster_large_language_models_via/,"In this work, we suggest that training language models to predict multiple future tokens at once results in higher sample efficiency." Pre-Training Curriculum for Multi-Token Prediction in Language Models,2505.22757v1,blockwise_parallel_decoding,\cite{blockwise_parallel_decoding},Blockwise Parallel Decoding for Deep Autoregressive Models,http://arxiv.org/abs/1811.03115v1,"Deep autoregressive sequence-to-sequence models have demonstrated impressive performance across a wide variety of tasks in recent years. While common architecture classes such as recurrent, convolutional, and self-attention networks make different trade-offs between the amount of computation needed per layer and the length of the critical path at training time, generation still remains an inherently sequential process. To overcome this limitation, we propose a novel blockwise parallel decoding scheme in which we make predictions for multiple time steps in parallel then back off to the longest prefix validated by a scoring model. This allows for substantial theoretical improvements in generation speed when applied to architectures that can process output sequences in parallel. We verify our approach empirically through a series of experiments using state-of-the-art self-attention models for machine translation and image super-resolution, achieving iteration reductions of up to 2x over a baseline greedy decoder with no loss in quality, or up to 7x in exchange for a slight decrease in performance. 
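The multi-token prediction record completed above attaches n independent output heads to a shared trunk, with head k trained to predict the token k+1 positions ahead. A minimal PyTorch sketch of the heads and the shifted cross-entropy loss (an illustration of the idea, not the paper's implementation):

```python
import torch
import torch.nn as nn

class MultiTokenHeads(nn.Module):
    """n independent output heads on a shared trunk: head k scores the token
    k+1 positions ahead of the current position."""
    def __init__(self, d_model: int, vocab: int, n_heads: int = 4):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(d_model, vocab) for _ in range(n_heads))

    def forward(self, trunk_states):  # trunk_states: (batch, seq, d_model)
        return [head(trunk_states) for head in self.heads]

def mtp_loss(logits_per_head, tokens):
    """Average of shifted cross-entropies: head k's targets are shifted by k+1."""
    loss = 0.0
    for k, logits in enumerate(logits_per_head):
        shift = k + 1
        pred = logits[:, :-shift].reshape(-1, logits.size(-1))
        tgt = tokens[:, shift:].reshape(-1)
        loss = loss + nn.functional.cross_entropy(pred, tgt)
    return loss / len(logits_per_head)
```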
In terms of wall-clock time, our fastest models exhibit real-time speedups of up to 4x over standard greedy decoding.",True,True,"Stern, Mitchell and Shazeer, Noam and Uszkoreit, Jakob",2018.0,,https://proceedings.neurips.cc/paper_files/paper/2018/file/c4127b9194fe8562c64dc0f5bf2c93bc-Paper.pdf,,,Blockwise Parallel Decoding for Deep Autoregressive Models,Blockwise Parallel Decoding for Deep Autoregressive Models,http://arxiv.org/pdf/1811.03115v1,"Deep autoregressive sequence-to-sequence models have demonstrated impressive performance across a wide variety of tasks in recent years. While common architecture classes such as recurrent, convolutional, and self-attention networks make different trade-offs between the amount of computation needed per layer and the length of the critical path at training time, generation still remains an inherently sequential process. To overcome this limitation, we propose a novel blockwise parallel decoding scheme in which we make predictions for multiple time steps in parallel then back off to the longest prefix validated by a scoring model. This allows for substantial theoretical improvements in generation speed when applied to architectures that can process output sequences in parallel. We verify our approach empirically through a series of experiments using state-of-the-art self-attention models for machine translation and image super-resolution, achieving iteration reductions of up to 2x over a baseline greedy decoder with no loss in quality, or up to 7x in exchange for a slight decrease in performance. In terms of wall-clock time, our fastest models exhibit real-time speedups of up to 4x over standard greedy decoding." Pre-Training Curriculum for Multi-Token Prediction in Language Models,2505.22757v1,layerskip,\cite{layerskip},{L}ayer{S}kip: Enabling Early Exit Inference and Self-Speculative Decoding,,,True,False,"Elhoushi, Mostafa and Shrivastava, Akshat and Liskovich, Diana and Hosmer, Basil and Wasti, Bram and Lai, Liangzhen and Mahmoud, Anas and Acun, Bilge and Agarwal, Saurabh and Roman, Ahmed and Aly, Ahmed and Chen, Beidi and Wu, Carole-Jean",2024.0,,https://aclanthology.org/2024.acl-long.681,10.18653/v1/2024.acl-long.681,,{L}ayer{S}kip: Enabling Early Exit Inference and Self-Speculative Decoding,Enabling Early Exit Inference and Self-Speculative Decoding,https://aclanthology.org/2024.acl-long.681/,"We present LayerSkip, an end-to-end solution to speed-up inference of large language models (LLMs). First, during training we apply layer dropout." Pre-Training Curriculum for Multi-Token Prediction in Language Models,2505.22757v1,kangaroo,\cite{kangaroo},Kangaroo: Lossless Self-Speculative Decoding via Double Early Exiting,http://arxiv.org/abs/2404.18911v1,"Speculative decoding has demonstrated its effectiveness in accelerating the inference of large language models while maintaining a consistent sampling distribution. However, the conventional approach of training a separate draft model to achieve a satisfactory token acceptance rate can be costly. Drawing inspiration from early exiting, we propose a novel self-speculative decoding framework \emph{Kangaroo}, which uses a fixed shallow sub-network as a self-draft model, with the remaining layers serving as the larger target model. We train a lightweight and efficient adapter module on top of the sub-network to bridge the gap between the sub-network and the full model's representation ability. 
It is noteworthy that the inference latency of the self-draft model may no longer be negligible compared to the large model, necessitating strategies to increase the token acceptance rate while minimizing the drafting steps of the small model. To address this challenge, we introduce an additional early exiting mechanism for generating draft tokens. Specifically, we halt the small model's subsequent prediction during the drafting phase once the confidence level for the current token falls below a certain threshold. Extensive experiments on the Spec-Bench demonstrate the effectiveness of Kangaroo. Under single-sequence verification, Kangaroo achieves speedups up to $1.68\times$ on Spec-Bench, outperforming Medusa-1 with 88.7\% fewer additional parameters (67M compared to 591M). The code for Kangaroo is available at https://github.com/Equationliu/Kangaroo.",True,True,Fangcheng Liu and Yehui Tang and Zhenhua Liu and Yunsheng Ni and Kai Han and Yunhe Wang,2024.0,,https://arxiv.org/abs/2404.18911,,,Kangaroo: Lossless Self-Speculative Decoding via Double Early Exiting,NeurIPS Poster Kangaroo: Lossless Self-Speculative Decoding for ...,https://neurips.cc/virtual/2024/poster/93829,"Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early Exiting However, the conventional approach of training separate draft model to achieve a satisfactory token acceptance rate can be costly and impractical. In this paper, we propose a novel self-speculative decoding framework \emph{Kangaroo} with \emph{double} early exiting strategy, which leverages the shallow sub-network and the \texttt{LM Head} of the well-trained target LLM to construct a self-drafting model. One significant challenge that comes with the proposed method is that the inference latency of the self-draft model may no longer be negligible compared to the big model. To boost the token acceptance rate while minimizing the latency of the self-drafting model, we introduce an additional \emph{early exiting} mechanism for both single-sequence and the tree decoding scenarios." Pre-Training Curriculum for Multi-Token Prediction in Language Models,2505.22757v1,draft_verify,\cite{draft_verify},"Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding",http://arxiv.org/abs/2309.08168v2,"We present a novel inference scheme, self-speculative decoding, for accelerating Large Language Models (LLMs) without the need for an auxiliary model. This approach is characterized by a two-stage process: drafting and verification. The drafting stage generates draft tokens at a slightly lower quality but more quickly, which is achieved by selectively skipping certain intermediate layers during drafting. Subsequently, the verification stage employs the original LLM to validate those draft output tokens in one forward pass. This process ensures the final output remains identical to that produced by the unaltered LLM. Moreover, the proposed method requires no additional neural network training and no extra memory footprint, making it a plug-and-play and cost-effective solution for inference acceleration.
Benchmarks with LLaMA-2 and its variants demonstrated a speedup up to 1.99$\times$.",True,True,"Zhang, Jun and Wang, Jue and Li, Huan and Shou, Lidan and Chen, Ke and Chen, Gang and Mehrotra, Sharad",2024.0,,https://aclanthology.org/2024.acl-long.607,10.18653/v1/2024.acl-long.607,,"Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding",Draft & Verify: Lossless Large Language Model ...,https://aclanthology.org/2024.acl-long.607/,"by J Zhang · 2024 · Cited by 130 — We present a novel inference scheme, self-speculative decoding, for accelerating Large Language Models (LLMs) without the need for an auxiliary model." Pre-Training Curriculum for Multi-Token Prediction in Language Models,2505.22757v1,swift,\cite{swift},"SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration",http://arxiv.org/abs/2410.06916v2,"Speculative decoding (SD) has emerged as a widely used paradigm to accelerate LLM inference without compromising quality. It works by first employing a compact model to draft multiple tokens efficiently and then using the target LLM to verify them in parallel. While this technique has achieved notable speedups, most existing approaches necessitate either additional parameters or extensive training to construct effective draft models, thereby restricting their applicability across different LLMs and tasks. To address this limitation, we explore a novel plug-and-play SD solution with layer-skipping, which skips intermediate layers of the target LLM as the compact draft model. Our analysis reveals that LLMs exhibit great potential for self-acceleration through layer sparsity and the task-specific nature of this sparsity. Building on these insights, we introduce SWIFT, an on-the-fly self-speculative decoding algorithm that adaptively selects intermediate layers of LLMs to skip during inference. SWIFT does not require auxiliary models or additional training, making it a plug-and-play solution for accelerating LLM inference across diverse input data streams. Our extensive experiments across a wide range of models and downstream tasks demonstrate that SWIFT can achieve over a 1.3x-1.6x speedup while preserving the original distribution of the generated text. We release our code in https://github.com/hemingkx/SWIFT.",True,True,Heming Xia and Yongqi Li and Jun Zhang and Cunxiao Du and Wenjie Li,2024.0,,https://arxiv.org/abs/2410.06916,,,"SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration",SWIFT: On-the-Fly Self-Speculative Decoding for LLM ...,https://github.com/hemingkx/SWIFT,SWIFT is an on-the-fly self-speculative decoding algorithm that adaptively selects intermediate layers of LLMs to skip during inference. Pre-Training Curriculum for Multi-Token Prediction in Language Models,2505.22757v1,koala,\cite{koala},"KOALA: Enhancing Speculative Decoding for LLM via Multi-Layer Draft Heads with Adversarial Learning",http://arxiv.org/abs/2408.08146v1,"Large Language Models (LLMs) exhibit high inference latency due to their autoregressive decoding nature. While the draft head in speculative decoding mitigates this issue, its full potential remains unexplored. In this paper, we introduce KOALA (K-layer Optimized Adversarial Learning Architecture), an orthogonal approach to the draft head. 
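The LayerSkip, Kangaroo, Draft & Verify, and SWIFT records above all follow the same self-speculative pattern: draft cheaply (e.g., with early-exited shallow layers of the target model), then verify with the full model in one forward pass. A greedy, batch-size-1 sketch of that loop, assuming HuggingFace-style models whose outputs expose `.logits` (no KV caching, for clarity):

```python
import torch

@torch.no_grad()
def draft_and_verify(draft_model, target_model, ids, n_draft=4, n_new=64):
    """Greedy speculative decoding skeleton: the draft model proposes n_draft
    tokens, the target model verifies them in one pass, and we keep the
    longest matching prefix plus the target's own correction token."""
    while ids.size(1) < n_new:
        draft = ids
        for _ in range(n_draft):  # cheap drafting phase
            nxt = draft_model(draft).logits[:, -1].argmax(-1, keepdim=True)
            draft = torch.cat([draft, nxt], dim=1)
        # one pass of the full model over prompt + draft
        verify = target_model(draft).logits.argmax(-1)
        keep = ids.size(1)
        while keep < draft.size(1) and draft[0, keep] == verify[0, keep - 1]:
            keep += 1  # accept the matching prefix of draft tokens
        fixed = verify[:, keep - 1 : keep]  # target model's next token
        ids = torch.cat([draft[:, :keep], fixed], dim=1)
    return ids
```

With greedy verification as above, the output matches what the target model alone would generate, which is the "lossless" property these records emphasize.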
By transforming the conventional single-layer draft head into a multi-layer architecture and incorporating adversarial learning into the traditional supervised training, KOALA significantly improves the accuracy of the draft head in predicting subsequent tokens, thus more closely mirroring the functionality of LLMs. Although this improvement comes at the cost of slightly increased drafting overhead, KOALA substantially unlocks the draft head's potential, greatly enhancing speculative decoding. We conducted comprehensive evaluations of KOALA, including both autoregressive and non-autoregressive draft heads across various tasks, demonstrating a latency speedup ratio improvement of 0.24x-0.41x, which is 10.57%-14.09% faster than the original draft heads.",True,True,Kaiqi Zhang and Jing Zhao and Rui Chen,2024.0,,https://arxiv.org/abs/2408.08146,,,"KOALA: Enhancing Speculative Decoding for LLM via Multi-Layer Draft Heads with Adversarial Learning",hemingkx/SpeculativeDecodingPapers: Must-read papers ... - GitHub,https://github.com/hemingkx/SpeculativeDecodingPapers,"[pdf], 2024.08. KOALA: Enhancing Speculative Decoding for LLM via Multi-Layer Draft Heads with Adversarial Learning Kaiqi Zhang, Jing Zhao, Rui Chen. [pdf]" Pre-Training Curriculum for Multi-Token Prediction in Language Models,2505.22757v1,medusa,\cite{medusa},"Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads",http://arxiv.org/abs/2401.10774v3,"Large Language Models (LLMs) employ auto-regressive decoding that requires sequential computation, with each step reliant on the previous one's output. This creates a bottleneck as each step necessitates moving the full model parameters from High-Bandwidth Memory (HBM) to the accelerator's cache. While methods such as speculative decoding have been suggested to address this issue, their implementation is impeded by the challenges associated with acquiring and maintaining a separate draft model. In this paper, we present Medusa, an efficient method that augments LLM inference by adding extra decoding heads to predict multiple subsequent tokens in parallel. Using a tree-based attention mechanism, Medusa constructs multiple candidate continuations and verifies them simultaneously in each decoding step. By leveraging parallel processing, Medusa substantially reduces the number of decoding steps required. We present two levels of fine-tuning procedures for Medusa to meet the needs of different use cases: Medusa-1: Medusa is directly fine-tuned on top of a frozen backbone LLM, enabling lossless inference acceleration. Medusa-2: Medusa is fine-tuned together with the backbone LLM, enabling better prediction accuracy of Medusa heads and higher speedup but needing a special training recipe that preserves the backbone model's capabilities. Moreover, we propose several extensions that improve or expand the utility of Medusa, including a self-distillation to handle situations where no training data is available and a typical acceptance scheme to boost the acceptance rate while maintaining generation quality. We evaluate Medusa on models of various sizes and training procedures. Our experiments demonstrate that Medusa-1 can achieve over 2.2x speedup without compromising generation quality, while Medusa-2 further improves the speedup to 2.3-3.6x.",True,True,Tianle Cai and Yuhong Li and Zhengyang Geng and Hongwu Peng and Jason D. 
Lee and Deming Chen and Tri Dao,2024.0,,https://arxiv.org/abs/2401.10774,,,"Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads",Medusa: Simple Framework for Accelerating LLM ...,https://github.com/FasterDecoding/Medusa,Medusa is a simple framework that democratizes the acceleration techniques for LLM generation with multiple decoding heads. "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,lee1985determination,\cite{lee1985determination},Determination of {3D} human body postures from a single view,,,True,False,"Lee, Hsi-Jian and Chen, Zen",1985.0,,,,"Computer Vision, Graphics, and Image Processing",Determination of {3D} human body postures from a single view,Determination of 3D human body postures from a single view,https://www.sciencedirect.com/science/article/abs/pii/0734189X85900945,"In this paper a method is proposed to recover and interpret the 3D body structures of a person from a single view, provided that (1) at least six feature points on the head and a set of body joints are available on the image plane, and (2) the geometry of head and lengths of body segments formed by joints are known." "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,mehta2017monocular,\cite{mehta2017monocular},Monocular {3D} human pose estimation in the wild using improved cnn supervision,,,True,False,"Mehta, Dushyant and Rhodin, Helge and Casas, Dan and Fua, Pascal and Sotnychenko, Oleksandr and Xu, Weipeng and Theobalt, Christian",2017.0,,,,,Monocular {3D} human pose estimation in the wild using improved cnn supervision,Monocular 3D Human Pose Estimation In The Wild Using Improved ...,https://arxiv.org/abs/1611.09813,"Authors: Dushyant Mehta, Helge Rhodin, Dan Casas, Pascal Fua, Oleksandr Sotnychenko, Weipeng Xu, Christian Theobalt" "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,pavlakos2017coarse,\cite{pavlakos2017coarse},Coarse-to-Fine Volumetric Prediction for Single-Image 3D Human Pose,http://arxiv.org/abs/1611.07828v2,"This paper addresses the challenge of 3D human pose estimation from a single color image. Despite the general success of the end-to-end learning paradigm, top performing approaches employ a two-step solution consisting of a Convolutional Network (ConvNet) for 2D joint localization and a subsequent optimization step to recover 3D pose.
In this paper, we identify the representation of 3D pose as a critical issue with current ConvNet approaches and make two important contributions towards validating the value of end-to-end learning for this task. First, we propose a fine discretization of the 3D space around the subject and train a ConvNet to predict per voxel likelihoods for each joint. This creates a natural representation for 3D pose and greatly improves performance over the direct regression of joint coordinates. Second, to further improve upon initial estimates, we employ a coarse-to-fine prediction scheme. This step addresses the large dimensionality increase and enables iterative refinement and repeated processing of the image features. The proposed approach outperforms all state-of-the-art methods on standard benchmarks achieving a relative error reduction greater than 30% on average. Additionally, we investigate using our volumetric representation in a related architecture which is suboptimal compared to our end-to-end approach, but is of practical interest, since it enables training when no image with corresponding 3D groundtruth is available, and allows us to present compelling results for in-the-wild images.",True,True,"Pavlakos, Georgios and Zhou, Xiaowei and Derpanis, Konstantinos G and Daniilidis, Kostas",2017.0,,,,,Coarse-to-Fine Volumetric Prediction for Single-Image 3D Human Pose,Coarse-to-Fine Volumetric Prediction for Single-Image 3D ...,https://arxiv.org/abs/1611.07828,"arXiv:1611.07828" "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,cai2019exploiting,\cite{cai2019exploiting},Exploiting spatial-temporal relationships for {3D} pose estimation via graph convolutional networks,,,True,False,"Cai, Yujun and Ge, Liuhao and Liu, Jun and Cai, Jianfei and Cham, Tat-Jen and Yuan, Junsong and Thalmann, Nadia Magnenat",2019.0,,,,,Exploiting spatial-temporal relationships for {3D} pose estimation via graph convolutional networks,vanoracai/Exploiting-Spatial-temporal-Relationships-for- ...,https://github.com/vanoracai/Exploiting-Spatial-temporal-Relationships-for-3D-Pose-Estimation-via-Graph-Convolutional-Networks,This is the code for the paper ICCV 2019 Exploiting Spatial-temporal Relationships for 3D Pose Estimation via Graph Convolutional Networks in Pytorch. "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,martinez2017simple,\cite{martinez2017simple},A simple yet effective baseline for 3d human pose estimation,http://arxiv.org/abs/1705.03098v2,"Following the success of deep convolutional networks, state-of-the-art methods for 3d human pose estimation have focused on deep end-to-end systems that predict 3d joint locations given raw image pixels.
Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a limited 2d pose (visual) understanding, or from a failure to map 2d poses into 3-dimensional positions. With the goal of understanding these sources of error, we set out to build a system that given 2d joint locations predicts 3d positions. Much to our surprise, we have found that, with current technology, ""lifting"" ground truth 2d joint locations to 3d space is a task that can be solved with a remarkably low error rate: a relatively simple deep feed-forward network outperforms the best reported result by about 30\% on Human3.6M, the largest publicly available 3d pose estimation benchmark. Furthermore, training our system on the output of an off-the-shelf state-of-the-art 2d detector (\ie, using images as input) yields state of the art results -- this includes an array of systems that have been trained end-to-end specifically for this task. Our results indicate that a large portion of the error of modern deep 3d pose estimation systems stems from their visual analysis, and suggests directions to further advance the state of the art in 3d human pose estimation.",True,True,"Martinez, Julieta and Hossain, Rayat and Romero, Javier and Little, James J",2017.0,,,,,A simple yet effective baseline for 3d human pose estimation,A simple yet effective baseline for 3d human pose estimation,http://arxiv.org/pdf/1705.03098v2,"Following the success of deep convolutional networks, state-of-the-art methods for 3d human pose estimation have focused on deep end-to-end systems that predict 3d joint locations given raw image pixels. Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a limited 2d pose (visual) understanding, or from a failure to map 2d poses into 3-dimensional positions. With the goal of understanding these sources of error, we set out to build a system that given 2d joint locations predicts 3d positions. Much to our surprise, we have found that, with current technology, ""lifting"" ground truth 2d joint locations to 3d space is a task that can be solved with a remarkably low error rate: a relatively simple deep feed-forward network outperforms the best reported result by about 30\% on Human3.6M, the largest publicly available 3d pose estimation benchmark. Furthermore, training our system on the output of an off-the-shelf state-of-the-art 2d detector (\ie, using images as input) yields state of the art results -- this includes an array of systems that have been trained end-to-end specifically for this task. Our results indicate that a large portion of the error of modern deep 3d pose estimation systems stems from their visual analysis, and suggests directions to further advance the state of the art in 3d human pose estimation." 
"Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,zhao2019semantic,\cite{zhao2019semantic},{Semantic Graph Convolutional Networks for 3D Human Pose Regression},,,True,False,"Zhao, Long and Peng, Xi and Tian, Yu and Kapadia, Mubbasir and Metaxas, Dimitris N",2019.0,,,,,{Semantic Graph Convolutional Networks for 3D Human Pose Regression},Semantic Graph Convolutional Networks for 3D Human ...,https://openaccess.thecvf.com/content_CVPR_2019/papers/Zhao_Semantic_Graph_Convolutional_Networks_for_3D_Human_Pose_Regression_CVPR_2019_paper.pdf,"by L Zhao · 2019 · Cited by 714 — SemGCN is a novel network for regression tasks with graph data, capturing semantic information, and applied to 3D human pose regression." "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,zou2021modulated,\cite{zou2021modulated},Modulated graph convolutional network for {3D} human pose estimation,,,True,False,"Zou, Zhiming and Tang, Wei",2021.0,,,,,Modulated graph convolutional network for {3D} human pose estimation,Modulated Graph Convolutional Network for 3D Human Pose ...,https://ieeexplore.ieee.org/document/9710217/,The graph convolutional network (GCN) has recently achieved promising performance of 3D human pose estimation (HPE) by modeling the relationship among body "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,zhao2022graformer,\cite{zhao2022graformer},{GraFormer: Graph-oriented Transformer for {3D} Pose Estimation},,,True,False,"Zhao, Weixi and Wang, Weiqiang and Tian, Yunjie",2022.0,,,,,{GraFormer: Graph-oriented Transformer for {3D} Pose Estimation},[PDF] GraFormer: Graph-Oriented Transformer for 3D Pose Estimation,https://openaccess.thecvf.com/content/CVPR2022/papers/Zhao_GraFormer_Graph-Oriented_Transformer_for_3D_Pose_Estimation_CVPR_2022_paper.pdf,"In this paper, we use a new transformer architecture by embedding graph convolution operations to improve the. 3D pose estimation. 3. Method. As shown in Figure" "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,ZhongTMM2024,\cite{ZhongTMM2024},{Frame-Padded Multiscale Transformer for Monocular {3D} Human Pose Estimation},,,True,False,"Zhong, Yuanhong and Yang, Guangxia and Zhong, Daidi and Yang, Xun and Wang, Shanshan",2024.0,,,10.1109/TMM.2023.3347095,IEEE Transactions on Multimedia,{Frame-Padded Multiscale Transformer for Monocular {3D} Human Pose Estimation},Frame-Padded Multiscale Transformer for Monocular 3D Human ...,https://dl.acm.org/doi/10.1109/TMM.2023.3347095,Abstract. Monocular 3D human pose estimation is an ill-posed problem in computer vision due to its depth ambiguity. Most existing works supplement the depth "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,WangTMM2024,\cite{WangTMM2024},{Exploiting Temporal Correlations for {3D} Human Pose Estimation},,,True,False,"Wang, Ruibin and Ying, Xianghua and Xing, Bowei",2024.0,,,10.1109/TMM.2023.3323874,IEEE Transactions on Multimedia,{Exploiting Temporal Correlations for {3D} Human Pose Estimation},Exploiting Temporal Correlations for 3D Human Pose ...,http://ieeexplore.ieee.org/document/10278485/,Exploiting the rich temporal information in human pose sequences to facilitate 3D pose estimation has garnered particular attention. 
"Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,tang20233d,\cite{tang20233d},{3D} human pose estimation with spatio-temporal criss-cross attention,,,True,False,"Tang, Zhenhua and Qiu, Zhaofan and Hao, Yanbin and Hong, Richang and Yao, Ting",2023.0,,,,,{3D} human pose estimation with spatio-temporal criss-cross attention,zhenhuat/STCFormer: (CVPR2023)3D Human Pose ...,https://github.com/zhenhuat/STCFormer,This is the readme file for the code release of 3D Human Pose Estimation with Spatio-Temporal Criss-cross Attention on PyTorch platform. "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,li2022mhformer,\cite{li2022mhformer},MHFormer: Multi-Hypothesis Transformer for 3D Human Pose Estimation,http://arxiv.org/abs/2111.12707v4,"Estimating 3D human poses from monocular videos is a challenging task due to depth ambiguity and self-occlusion. Most existing works attempt to solve both issues by exploiting spatial and temporal relationships. However, those works ignore the fact that it is an inverse problem where multiple feasible solutions (i.e., hypotheses) exist. To relieve this limitation, we propose a Multi-Hypothesis Transformer (MHFormer) that learns spatio-temporal representations of multiple plausible pose hypotheses. In order to effectively model multi-hypothesis dependencies and build strong relationships across hypothesis features, the task is decomposed into three stages: (i) Generate multiple initial hypothesis representations; (ii) Model self-hypothesis communication, merge multiple hypotheses into a single converged representation and then partition it into several diverged hypotheses; (iii) Learn cross-hypothesis communication and aggregate the multi-hypothesis features to synthesize the final 3D pose. Through the above processes, the final representation is enhanced and the synthesized pose is much more accurate. Extensive experiments show that MHFormer achieves state-of-the-art results on two challenging datasets: Human3.6M and MPI-INF-3DHP. Without bells and whistles, its performance surpasses the previous best result by a large margin of 3% on Human3.6M. Code and models are available at \url{https://github.com/Vegetebird/MHFormer}.",True,True,"Li, Wenhao and Liu, Hong and Tang, Hao and Wang, Pichao and Van Gool, Luc",2022.0,,,,,MHFormer: Multi-Hypothesis Transformer for 3D Human Pose Estimation,Multi-Hypothesis Transformer for 3D Human Pose Estimation - arXiv,https://arxiv.org/abs/2111.12707,We propose a Multi-Hypothesis Transformer (MHFormer) that learns spatio-temporal representations of multiple plausible pose hypotheses. "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,liu2023posynda,\cite{liu2023posynda},"PoSynDA: Multi-Hypothesis Pose Synthesis Domain Adaptation for Robust 3D Human Pose Estimation",http://arxiv.org/abs/2308.09678v2,"Existing 3D human pose estimators face challenges in adapting to new datasets due to the lack of 2D-3D pose pairs in training sets. To overcome this issue, we propose \textit{Multi-Hypothesis \textbf{P}ose \textbf{Syn}thesis \textbf{D}omain \textbf{A}daptation} (\textbf{PoSynDA}) framework to bridge this data disparity gap in target domain. Typically, PoSynDA uses a diffusion-inspired structure to simulate 3D pose distribution in the target domain. By incorporating a multi-hypothesis network, PoSynDA generates diverse pose hypotheses and aligns them with the target domain. 
To do this, it first utilizes target-specific source augmentation to obtain the target domain distribution data from the source domain by decoupling the scale and position parameters. The process is then further refined through the teacher-student paradigm and low-rank adaptation. With extensive comparison of benchmarks such as Human3.6M and MPI-INF-3DHP, PoSynDA demonstrates competitive performance, even comparable to the target-trained MixSTE model\cite{zhang2022mixste}. This work paves the way for the practical application of 3D human pose estimation in unseen domains. The code is available at https://github.com/hbing-l/PoSynDA.",True,True,"Liu, Hanbing and He, Jun-Yan and Cheng, Zhi-Qi and Xiang, Wangmeng and Yang, Qize and Chai, Wenhao and Wang, Gaoang and Bao, Xu and Luo, Bin and Geng, Yifeng and others",2023.0,,,,,"PoSynDA: Multi-Hypothesis Pose Synthesis Domain Adaptation for Robust 3D Human Pose Estimation",PoSynDA: Multi-Hypothesis Pose Synthesis Domain ...,https://github.com/hbing-l/PoSynDA,PoSynDA is a novel framework for 3D Human Pose Estimation (3D HPE) that addresses the challenges of adapting to new datasets due to the scarcity of 2D-3D "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,chen2023hdformer,\cite{chen2023hdformer},HDFormer: High-order Directed Transformer for 3D Human Pose Estimation,http://arxiv.org/abs/2302.01825v2,"Human pose estimation is a challenging task due to its structured data sequence nature. Existing methods primarily focus on pair-wise interaction of body joints, which is insufficient for scenarios involving overlapping joints and rapidly changing poses. To overcome these issues, we introduce a novel approach, the High-order Directed Transformer (HDFormer), which leverages high-order bone and joint relationships for improved pose estimation. Specifically, HDFormer incorporates both self-attention and high-order attention to formulate a multi-order attention module. This module facilitates first-order ""joint$\leftrightarrow$joint"", second-order ""bone$\leftrightarrow$joint"", and high-order ""hyperbone$\leftrightarrow$joint"" interactions, effectively addressing issues in complex and occlusion-heavy situations. In addition, modern CNN techniques are integrated into the transformer-based architecture, balancing the trade-off between performance and efficiency. HDFormer significantly outperforms state-of-the-art (SOTA) models on Human3.6M and MPI-INF-3DHP datasets, requiring only 1/10 of the parameters and significantly lower computational costs. Moreover, HDFormer demonstrates broad real-world applicability, enabling real-time, accurate 3D pose estimation. 
The source code is in https://github.com/hyer/HDFormer",True,True,"Chen, Hanyuan and He, Jun-Yan and Xiang, Wangmeng and Cheng, Zhi-Qi and Liu, Wei and Liu, Hanbing and Luo, Bin and Geng, Yifeng and Xie, Xuansong",2023.0,,,,,HDFormer: High-order Directed Transformer for 3D Human Pose Estimation,High-order Directed Transformer for 3D Human Pose Estimation,https://arxiv.org/abs/2302.01825,"HDFormer is a novel approach for 3D human pose estimation using high-order bone and joint relationships, addressing issues with overlapping" "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,hu2021conditional,\cite{hu2021conditional},Conditional Directed Graph Convolution for 3D Human Pose Estimation,http://arxiv.org/abs/2107.07797v2,"Graph convolutional networks have significantly improved 3D human pose estimation by representing the human skeleton as an undirected graph. However, this representation fails to reflect the articulated characteristic of human skeletons as the hierarchical orders among the joints are not explicitly presented. In this paper, we propose to represent the human skeleton as a directed graph with the joints as nodes and bones as edges that are directed from parent joints to child joints. By so doing, the directions of edges can explicitly reflect the hierarchical relationships among the nodes. Based on this representation, we further propose a spatial-temporal conditional directed graph convolution to leverage varying non-local dependence for different poses by conditioning the graph topology on input poses. Altogether, we form a U-shaped network, named U-shaped Conditional Directed Graph Convolutional Network, for 3D human pose estimation from monocular videos. To evaluate the effectiveness of our method, we conducted extensive experiments on two challenging large-scale benchmarks: Human3.6M and MPI-INF-3DHP. Both quantitative and qualitative results show that our method achieves top performance. Also, ablation studies show that directed graphs can better exploit the hierarchy of articulated human skeletons than undirected graphs, and the conditional connections can yield adaptive graph topologies for different poses.",True,True,"Hu, Wenbo and Zhang, Changgong and Zhan, Fangneng and Zhang, Lei and Wong, Tien-Tsin",2021.0,,,,,Conditional Directed Graph Convolution for 3D Human Pose Estimation,Conditional Directed Graph Convolution for 3D Human Pose Estimation,http://arxiv.org/pdf/2107.07797v2,"Graph convolutional networks have significantly improved 3D human pose estimation by representing the human skeleton as an undirected graph. However, this representation fails to reflect the articulated characteristic of human skeletons as the hierarchical orders among the joints are not explicitly presented. In this paper, we propose to represent the human skeleton as a directed graph with the joints as nodes and bones as edges that are directed from parent joints to child joints. By so doing, the directions of edges can explicitly reflect the hierarchical relationships among the nodes. Based on this representation, we further propose a spatial-temporal conditional directed graph convolution to leverage varying non-local dependence for different poses by conditioning the graph topology on input poses. Altogether, we form a U-shaped network, named U-shaped Conditional Directed Graph Convolutional Network, for 3D human pose estimation from monocular videos. 
To evaluate the effectiveness of our method, we conducted extensive experiments on two challenging large-scale benchmarks: Human3.6M and MPI-INF-3DHP. Both quantitative and qualitative results show that our method achieves top performance. Also, ablation studies show that directed graphs can better exploit the hierarchy of articulated human skeletons than undirected graphs, and the conditional connections can yield adaptive graph topologies for different poses." "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,ci2019optimizing,\cite{ci2019optimizing},Optimizing network structure for {3D} human pose estimation,,,True,False,"Ci, Hai and Wang, Chunyu and Ma, Xiaoxuan and Wang, Yizhou",2019.0,,,,,Optimizing network structure for {3D} human pose estimation,Optimizing Network Structure for 3D Human Pose Estimation,https://openaccess.thecvf.com/content_ICCV_2019/papers/Ci_Optimizing_Network_Structure_for_3D_Human_Pose_Estimation_ICCV_2019_paper.pdf,by H Ci · 2019 · Cited by 312 — A 3D human pose is naturally represented by a skele- tal graph parameterized by the 3D locations of the body joints such as elbows and knees. See Figure 1. When "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,liu2020comprehensive,\cite{liu2020comprehensive},A comprehensive study of weight sharing in graph networks for {3D} human pose estimation,,,True,False,"Liu, Kenkun and Ding, Rongqi and Zou, Zhiming and Wang, Le and Tang, Wei",2020.0,,,,,A comprehensive study of weight sharing in graph networks for {3D} human pose estimation,A Comprehensive Study of Weight Sharing in Graph ...,https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123550324.pdf,by K Liu · Cited by 182 — Graph convolutional networks (GCNs) have been applied to. 3D human pose estimation (HPE) from 2D body joint detections and have shown encouraging performance. "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,wang2018non,\cite{wang2018non},Non-local Neural Networks,http://arxiv.org/abs/1711.07971v3,"Both convolutional and recurrent operations are building blocks that process one local neighborhood at a time. In this paper, we present non-local operations as a generic family of building blocks for capturing long-range dependencies. Inspired by the classical non-local means method in computer vision, our non-local operation computes the response at a position as a weighted sum of the features at all positions. This building block can be plugged into many computer vision architectures. On the task of video classification, even without any bells and whistles, our non-local models can compete or outperform current competition winners on both Kinetics and Charades datasets. In static image recognition, our non-local models improve object detection/segmentation and pose estimation on the COCO suite of tasks. Code is available at https://github.com/facebookresearch/video-nonlocal-net .",True,True,"Wang, Xiaolong and Girshick, Ross and Gupta, Abhinav and He, Kaiming",2018.0,,,,,Non-local Neural Networks,[PDF] Non-Local Neural Networks - CVF Open Access,https://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Non-Local_Neural_Networks_CVPR_2018_paper.pdf,"Non-local operations capture long-range dependencies by computing a weighted sum of features at all positions, unlike local operations. 
They are efficient and" "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,gong2023diffpose,\cite{gong2023diffpose},DiffPose: Toward More Reliable 3D Pose Estimation,http://arxiv.org/abs/2211.16940v3,"Monocular 3D human pose estimation is quite challenging due to the inherent ambiguity and occlusion, which often lead to high uncertainty and indeterminacy. On the other hand, diffusion models have recently emerged as an effective tool for generating high-quality images from noise. Inspired by their capability, we explore a novel pose estimation framework (DiffPose) that formulates 3D pose estimation as a reverse diffusion process. We incorporate novel designs into our DiffPose to facilitate the diffusion process for 3D pose estimation: a pose-specific initialization of pose uncertainty distributions, a Gaussian Mixture Model-based forward diffusion process, and a context-conditioned reverse diffusion process. Our proposed DiffPose significantly outperforms existing methods on the widely used pose estimation benchmarks Human3.6M and MPI-INF-3DHP. Project page: https://gongjia0208.github.io/Diffpose/.",True,True,"Gong, Jia and Foo, Lin Geng and Fan, Zhipeng and Ke, Qiuhong and Rahmani, Hossein and Liu, Jun",2023.0,,,,,DiffPose: Toward More Reliable 3D Pose Estimation,DiffPose: Toward More Reliable 3D Pose Estimation,http://arxiv.org/pdf/2211.16940v3,"Monocular 3D human pose estimation is quite challenging due to the inherent ambiguity and occlusion, which often lead to high uncertainty and indeterminacy. On the other hand, diffusion models have recently emerged as an effective tool for generating high-quality images from noise. Inspired by their capability, we explore a novel pose estimation framework (DiffPose) that formulates 3D pose estimation as a reverse diffusion process. We incorporate novel designs into our DiffPose to facilitate the diffusion process for 3D pose estimation: a pose-specific initialization of pose uncertainty distributions, a Gaussian Mixture Model-based forward diffusion process, and a context-conditioned reverse diffusion process. Our proposed DiffPose significantly outperforms existing methods on the widely used pose estimation benchmarks Human3.6M and MPI-INF-3DHP. Project page: https://gongjia0208.github.io/Diffpose/." "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,holmquist2023diffpose,\cite{holmquist2023diffpose},DiffPose: Multi-hypothesis Human Pose Estimation using Diffusion models,http://arxiv.org/abs/2211.16487v1,"Traditionally, monocular 3D human pose estimation employs a machine learning model to predict the most likely 3D pose for a given input image. However, a single image can be highly ambiguous and induces multiple plausible solutions for the 2D-3D lifting step which results in overly confident 3D pose predictors. To this end, we propose \emph{DiffPose}, a conditional diffusion model, that predicts multiple hypotheses for a given input image. In comparison to similar approaches, our diffusion model is straightforward and avoids intensive hyperparameter tuning, complex network structures, mode collapse, and unstable training. Moreover, we tackle a problem of the common two-step approach that first estimates a distribution of 2D joint locations via joint-wise heatmaps and consecutively approximates them based on first- or second-moment statistics. 
Since such a simplification of the heatmaps removes valid information about possibly correct, though labeled unlikely, joint locations, we propose to represent the heatmaps as a set of 2D joint candidate samples. To extract information about the original distribution from these samples we introduce our \emph{embedding transformer} that conditions the diffusion model. Experimentally, we show that DiffPose slightly improves upon the state of the art for multi-hypothesis pose estimation for simple poses and outperforms it by a large margin for highly ambiguous poses.",True,True,"Holmquist, Karl and Wandt, Bastian",2023.0,,,,,DiffPose: Multi-hypothesis Human Pose Estimation using Diffusion models,Multi-hypothesis Human Pose Estimation using Diffusion models,https://arxiv.org/abs/2211.16487,"We propose \emph{DiffPose}, a conditional diffusion model, that predicts multiple hypotheses for a given input image." "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,fang2018learning,\cite{fang2018learning},"Learning Pose Grammar to Encode Human Body Configuration for 3D Pose Estimation",http://arxiv.org/abs/1710.06513v6,"In this paper, we propose a pose grammar to tackle the problem of 3D human pose estimation. Our model directly takes 2D pose as input and learns a generalized 2D-3D mapping function. The proposed model consists of a base network which efficiently captures pose-aligned features and a hierarchy of Bi-directional RNNs (BRNN) on the top to explicitly incorporate a set of knowledge regarding human body configuration (i.e., kinematics, symmetry, motor coordination). The proposed model thus enforces high-level constraints over human poses. In learning, we develop a pose sample simulator to augment training samples in virtual camera views, which further improves our model generalizability. We validate our method on public 3D human pose benchmarks and propose a new evaluation protocol working on cross-view setting to verify the generalization capability of different methods. We empirically observe that most state-of-the-art methods encounter difficulty under such setting while our method can well handle such challenges.",True,True,"Fang, Hao-Shu and Xu, Yuanlu and Wang, Wenguan and Liu, Xiaobai and Zhu, Song-Chun",2018.0,,,,,"Learning Pose Grammar to Encode Human Body Configuration for 3D Pose Estimation",[PDF] Learning Pose Grammar to Encode Human Body Configuration for ...,https://cdn.aaai.org/ojs/12270/12270-13-15798-1-2-20201228.pdf,"In this paper, we propose a pose grammar to tackle the prob- lem of 3D human pose estimation. Our model directly takes. 2D pose as input and learns a" "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,he2021db,\cite{he2021db},{DB-LSTM: Densely-connected Bi-directional LSTM for human action recognition},,,True,False,"He, Jun-Yan and Wu, Xiao and Cheng, Zhi-Qi and Yuan, Zhaoquan and Jiang, Yu-Gang",2021.0,,,,Neurocomputing,{DB-LSTM: Densely-connected Bi-directional LSTM for human action recognition},Densely-connected Bi-directional LSTM for human action ...,https://www.sciencedirect.com/science/article/pii/S0925231220317859,"To boost the effectiveness and robustness of modeling long-range action recognition, a Densely-connected Bi-directional LSTM (DB-LSTM) network is novelly proposed to model the visual and temporal associations in both forward and backward directions. 
To overcome the drawbacks of existing methods, a long-range temporal model for human action recognition is novelly proposed in this paper, which comprehensively integrates the spatial, short-term as well as long-term temporal patterns of human actions. A deep learning model based on long-range modeling is novelly proposed for action recognition, which captures the global appearance and local motion dynamics, meanwhile integrates visual appearance and long-range temporal dynamics of human actions. In this paper, we novelly propose a Densely-connected Bi-directional LSTM (DB-LSTM) to capture the visual and temporal patterns of human actions, exploring a variety of insights around the long-range temporal pattern modeling." "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,zeng2021learning,\cite{zeng2021learning},Learning Skeletal Graph Neural Networks for Hard 3D Pose Estimation,http://arxiv.org/abs/2108.07181v2,"Various deep learning techniques have been proposed to solve the single-view 2D-to-3D pose estimation problem. While the average prediction accuracy has been improved significantly over the years, the performance on hard poses with depth ambiguity, self-occlusion, and complex or rare poses is still far from satisfactory. In this work, we target these hard poses and present a novel skeletal GNN learning solution. To be specific, we propose a hop-aware hierarchical channel-squeezing fusion layer to effectively extract relevant information from neighboring nodes while suppressing undesired noises in GNN learning. In addition, we propose a temporal-aware dynamic graph construction procedure that is robust and effective for 3D pose estimation. Experimental results on the Human3.6M dataset show that our solution achieves 10.3\% average prediction accuracy improvement and greatly improves on hard poses over state-of-the-art techniques. We further apply the proposed technique on the skeleton-based action recognition task and also achieve state-of-the-art performance. Our code is available at https://github.com/ailingzengzzz/Skeletal-GNN.",True,True,"Zeng, Ailing and Sun, Xiao and Yang, Lei and Zhao, Nanxuan and Liu, Minhao and Xu, Qiang",2021.0,,,,,Learning Skeletal Graph Neural Networks for Hard 3D Pose Estimation,Learning Skeletal Graph Neural Networks for Hard 3D Pose Estimation,http://arxiv.org/pdf/2108.07181v2,"Various deep learning techniques have been proposed to solve the single-view 2D-to-3D pose estimation problem. While the average prediction accuracy has been improved significantly over the years, the performance on hard poses with depth ambiguity, self-occlusion, and complex or rare poses is still far from satisfactory. In this work, we target these hard poses and present a novel skeletal GNN learning solution. To be specific, we propose a hop-aware hierarchical channel-squeezing fusion layer to effectively extract relevant information from neighboring nodes while suppressing undesired noises in GNN learning. In addition, we propose a temporal-aware dynamic graph construction procedure that is robust and effective for 3D pose estimation. Experimental results on the Human3.6M dataset show that our solution achieves 10.3\% average prediction accuracy improvement and greatly improves on hard poses over state-of-the-art techniques. We further apply the proposed technique on the skeleton-based action recognition task and also achieve state-of-the-art performance. Our code is available at https://github.com/ailingzengzzz/Skeletal-GNN."
"Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,zhang2023learning,\cite{zhang2023learning},Learning Enriched Hop-Aware Correlation for Robust {3D} Human Pose Estimation,,,True,False,"Zhang, Shengping and Wang, Chenyang and Nie, Liqiang and Yao, Hongxun and Huang, Qingming and Tian, Qi",2023.0,,,,International Journal of Computer Vision,Learning Enriched Hop-Aware Correlation for Robust {3D} Human Pose Estimation,Learning Enriched Hop-Aware Correlation for Robust 3D Human ...,https://link.springer.com/article/10.1007/s11263-023-01770-5,"This paper proposes a parallel hop-aware graph attention network (PHGANet) for 3D human pose estimation, which learns enriched hop-aware correlation of the" "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,li2022exploiting,\cite{li2022exploiting},"Exploiting Temporal Contexts with Strided Transformer for 3D Human Pose Estimation",http://arxiv.org/abs/2103.14304v8,"Despite the great progress in 3D human pose estimation from videos, it is still an open problem to take full advantage of a redundant 2D pose sequence to learn representative representations for generating one 3D pose. To this end, we propose an improved Transformer-based architecture, called Strided Transformer, which simply and effectively lifts a long sequence of 2D joint locations to a single 3D pose. Specifically, a Vanilla Transformer Encoder (VTE) is adopted to model long-range dependencies of 2D pose sequences. To reduce the redundancy of the sequence, fully-connected layers in the feed-forward network of VTE are replaced with strided convolutions to progressively shrink the sequence length and aggregate information from local contexts. The modified VTE is termed as Strided Transformer Encoder (STE), which is built upon the outputs of VTE. STE not only effectively aggregates long-range information to a single-vector representation in a hierarchical global and local fashion, but also significantly reduces the computation cost. Furthermore, a full-to-single supervision scheme is designed at both full sequence and single target frame scales applied to the outputs of VTE and STE, respectively. This scheme imposes extra temporal smoothness constraints in conjunction with the single target frame supervision and hence helps produce smoother and more accurate 3D poses. The proposed Strided Transformer is evaluated on two challenging benchmark datasets, Human3.6M and HumanEva-I, and achieves state-of-the-art results with fewer parameters. Code and models are available at \url{https://github.com/Vegetebird/StridedTransformer-Pose3D}.",True,True,"Li, Wenhao and Liu, Hong and Ding, Runwei and Liu, Mengyuan and Wang, Pichao and Yang, Wenming",2022.0,,,,IEEE Transactions on Multimedia,"Exploiting Temporal Contexts with Strided Transformer for 3D Human Pose Estimation",Vegetebird/StridedTransformer-Pose3D,https://github.com/Vegetebird/StridedTransformer-Pose3D,Exploiting Temporal Contexts with Strided Transformer for 3D Human Pose Estimation. This is the official implementation of the approach described in the paper. 
"Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,zhang2022mixste,\cite{zhang2022mixste},"MixSTE: Seq2seq Mixed Spatio-Temporal Encoder for 3D Human Pose Estimation in Video",http://arxiv.org/abs/2203.00859v4,"Recent transformer-based solutions have been introduced to estimate 3D human pose from 2D keypoint sequence by considering body joints among all frames globally to learn spatio-temporal correlation. We observe that the motions of different joints differ significantly. However, the previous methods cannot efficiently model the solid inter-frame correspondence of each joint, leading to insufficient learning of spatial-temporal correlation. We propose MixSTE (Mixed Spatio-Temporal Encoder), which has a temporal transformer block to separately model the temporal motion of each joint and a spatial transformer block to learn inter-joint spatial correlation. These two blocks are utilized alternately to obtain better spatio-temporal feature encoding. In addition, the network output is extended from the central frame to entire frames of the input video, thereby improving the coherence between the input and output sequences. Extensive experiments are conducted on three benchmarks (Human3.6M, MPI-INF-3DHP, and HumanEva). The results show that our model outperforms the state-of-the-art approach by 10.9% P-MPJPE and 7.6% MPJPE. The code is available at https://github.com/JinluZhang1126/MixSTE.",True,True,"Zhang, Jinlu and Tu, Zhigang and Yang, Jianyu and Chen, Yujin and Yuan, Junsong",2022.0,,,,,"MixSTE: Seq2seq Mixed Spatio-Temporal Encoder for 3D Human Pose Estimation in Video",MixSTE: Seq2seq Mixed Spatio-Temporal Encoder for 3D Human ...,https://github.com/JinluZhang1126/MixSTE,Official implementation of CVPR 2022 paper(MixSTE: Seq2seq Mixed Spatio-Temporal Encoder for 3D Human Pose Estimation in Video). "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,vaswani2017attention,\cite{vaswani2017attention},Attention Is All You Need,http://arxiv.org/abs/1706.03762v7,"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. 
We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.",True,True,"Vaswani, Ashish and Shazeer, Noam and Parmar, Niki and Uszkoreit, Jakob and Jones, Llion and Gomez, Aidan N and Kaiser, {\L}ukasz and Polosukhin, Illia",2017.0,,,,Advances in Neural Information Processing Systems,Attention Is All You Need,Attention Is All You Need,http://arxiv.org/pdf/1706.03762v7,"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data." "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,zhou2019hemlets,\cite{zhou2019hemlets},"HEMlets Pose: Learning Part-Centric Heatmap Triplets for Accurate 3D Human Pose Estimation",http://arxiv.org/abs/1910.12032v1,"Estimating 3D human pose from a single image is a challenging task. This work attempts to address the uncertainty of lifting the detected 2D joints to the 3D space by introducing an intermediate state - Part-Centric Heatmap Triplets (HEMlets), which shortens the gap between the 2D observation and the 3D interpretation. The HEMlets utilize three joint-heatmaps to represent the relative depth information of the end-joints for each skeletal body part. In our approach, a Convolutional Network (ConvNet) is first trained to predict HEMlests from the input image, followed by a volumetric joint-heatmap regression. We leverage on the integral operation to extract the joint locations from the volumetric heatmaps, guaranteeing end-to-end learning. Despite the simplicity of the network design, the quantitative comparisons show a significant performance improvement over the best-of-grade method (by 20% on Human3.6M). The proposed method naturally supports training with ""in-the-wild"" images, where only weakly-annotated relative depth information of skeletal joints is available. This further improves the generalization ability of our model, as validated by qualitative comparisons on outdoor images.",True,True,"Zhou, Kun and Han, Xiaoguang and Jiang, Nianjuan and Jia, Kui and Lu, Jiangbo",2019.0,,,,,"HEMlets Pose: Learning Part-Centric Heatmap Triplets for Accurate 3D Human Pose Estimation",redrock303/HEMlets,https://github.com/redrock303/HEMlets,Here we provide our implementation of HEMlets PoSh: Learning Part-Centric Heatmap Triplets for 3D Human Pose and Shape Estimation. 
"Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,zeng2020srnet,\cite{zeng2020srnet},"SRNet: Improving Generalization in 3D Human Pose Estimation with a Split-and-Recombine Approach",http://arxiv.org/abs/2007.09389v1,"Human poses that are rare or unseen in a training set are challenging for a network to predict. Similar to the long-tailed distribution problem in visual recognition, the small number of examples for such poses limits the ability of networks to model them. Interestingly, local pose distributions suffer less from the long-tail problem, i.e., local joint configurations within a rare pose may appear within other poses in the training set, making them less rare. We propose to take advantage of this fact for better generalization to rare and unseen poses. To be specific, our method splits the body into local regions and processes them in separate network branches, utilizing the property that a joint position depends mainly on the joints within its local body region. Global coherence is maintained by recombining the global context from the rest of the body into each branch as a low-dimensional vector. With the reduced dimensionality of less relevant body areas, the training set distribution within network branches more closely reflects the statistics of local poses instead of global body poses, without sacrificing information important for joint inference. The proposed split-and-recombine approach, called SRNet, can be easily adapted to both single-image and temporal models, and it leads to appreciable improvements in the prediction of rare and unseen poses.",True,True,"Zeng, Ailing and Sun, Xiao and Huang, Fuyang and Liu, Minhao and Xu, Qiang and Lin, Stephen",2020.0,,,,,"SRNet: Improving Generalization in 3D Human Pose Estimation with a Split-and-Recombine Approach","GitHub - ailingzengzzz/Split-and-Recombine-Net: Code for ""SRNet",https://github.com/ailingzengzzz/Split-and-Recombine-Net,This is the original PyTorch implementation of the following work: SRNet: Improving Generalization in 3D Human Pose Estimation with a Split-and-Recombine "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,xue2022boosting,\cite{xue2022boosting},Boosting monocular {3D} human pose estimation with part aware attention,,,True,False,"Xue, Youze and Chen, Jiansheng and Gu, Xiangming and Ma, Huimin and Ma, Hongbing",2022.0,,,,IEEE Transactions on Image Processing,Boosting monocular {3D} human pose estimation with part aware attention,Boosting Monocular 3D Human Pose Estimation With Part Aware ...,https://ieeexplore.ieee.org/iel7/83/9626658/09798770.pdf,"We thus propose the Part Aware. Dictionary Attention module to calculate the attention for the part-wise features of input in a dictionary, which contains." "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,wu2022hpgcn,\cite{wu2022hpgcn},{HPGCN: Hierarchical poselet-guided graph convolutional network for {3D} pose estimation},,,True,False,"Wu, Yongpeng and Kong, Dehui and Wang, Shaofan and Li, Jinghua and Yin, Baocai",2022.0,,,,Neurocomputing,{HPGCN: Hierarchical poselet-guided graph convolutional network for {3D} pose estimation},HPGCN: Hierarchical poselet-guided graph convolutional network ...,https://www.sciencedirect.com/science/article/pii/S0925231221016817,We propose a hierarchical poselet-guided graph convolutional network (HPGCN) for 3D pose estimation from 2D poses. 
"Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,xu2021graph,\cite{xu2021graph},Graph Stacked Hourglass Networks for 3D Human Pose Estimation,http://arxiv.org/abs/2103.16385v1,"In this paper, we propose a novel graph convolutional network architecture, Graph Stacked Hourglass Networks, for 2D-to-3D human pose estimation tasks. The proposed architecture consists of repeated encoder-decoder, in which graph-structured features are processed across three different scales of human skeletal representations. This multi-scale architecture enables the model to learn both local and global feature representations, which are critical for 3D human pose estimation. We also introduce a multi-level feature learning approach using different-depth intermediate features and show the performance improvements that result from exploiting multi-scale, multi-level feature representations. Extensive experiments are conducted to validate our approach, and the results show that our model outperforms the state-of-the-art.",True,True,"Xu, Tianhan and Takano, Wataru",2021.0,,,,,Graph Stacked Hourglass Networks for 3D Human Pose Estimation,Graph Stacked Hourglass Networks for 3D Human Pose Estimation,http://arxiv.org/pdf/2103.16385v1,"In this paper, we propose a novel graph convolutional network architecture, Graph Stacked Hourglass Networks, for 2D-to-3D human pose estimation tasks. The proposed architecture consists of repeated encoder-decoder, in which graph-structured features are processed across three different scales of human skeletal representations. This multi-scale architecture enables the model to learn both local and global feature representations, which are critical for 3D human pose estimation. We also introduce a multi-level feature learning approach using different-depth intermediate features and show the performance improvements that result from exploiting multi-scale, multi-level feature representations. Extensive experiments are conducted to validate our approach, and the results show that our model outperforms the state-of-the-art." "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,hua2022unet,\cite{hua2022unet},Weakly-supervised {3D} human pose estimation with cross-view U-shaped graph convolutional network,,,True,False,"Hua, Guoliang and Liu, Hong and Li, Wenhao and Zhang, Qian and Ding, Runwei and Xu, Xin",2022.0,,,,IEEE Transactions on Multimedia,Weakly-supervised {3D} human pose estimation with cross-view U-shaped graph convolutional network,Weakly-supervised 3D Human Pose Estimation with Cross-view U ...,https://arxiv.org/abs/2105.10882,"[2105.10882] Weakly-supervised 3D Human Pose Estimation with Cross-view U-shaped Graph Convolutional Network **arXiv:2105.10882** (cs) Title:Weakly-supervised 3D Human Pose Estimation with Cross-view U-shaped Graph Convolutional Network View a PDF of the paper titled Weakly-supervised 3D Human Pose Estimation with Cross-view U-shaped Graph Convolutional Network, by Guoliang Hua and 5 other authors In this paper, we propose a simple yet effective pipeline for weakly-supervised cross-view 3D human pose estimation. 
"Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,wu2022p2t,\cite{wu2022p2t},P2T: Pyramid Pooling Transformer for Scene Understanding,http://arxiv.org/abs/2106.12011v6,"Recently, the vision transformer has achieved great success by pushing the state-of-the-art of various vision tasks. One of the most challenging problems in the vision transformer is that the large sequence length of image tokens leads to high computational cost (quadratic complexity). A popular solution to this problem is to use a single pooling operation to reduce the sequence length. This paper considers how to improve existing vision transformers, where the pooled feature extracted by a single pooling operation seems less powerful. To this end, we note that pyramid pooling has been demonstrated to be effective in various vision tasks owing to its powerful ability in context abstraction. However, pyramid pooling has not been explored in backbone network design. To bridge this gap, we propose to adapt pyramid pooling to Multi-Head Self-Attention (MHSA) in the vision transformer, simultaneously reducing the sequence length and capturing powerful contextual features. Plugged with our pooling-based MHSA, we build a universal vision transformer backbone, dubbed Pyramid Pooling Transformer (P2T). Extensive experiments demonstrate that, when applied P2T as the backbone network, it shows substantial superiority in various vision tasks such as image classification, semantic segmentation, object detection, and instance segmentation, compared to previous CNN- and transformer-based networks. The code will be released at https://github.com/yuhuan-wu/P2T.",True,True,"Wu, Yu-Huan and Liu, Yun and Zhan, Xin and Cheng, Ming-Ming",2022.0,,,,IEEE Transactions on Pattern Analysis and Machine Intelligence,P2T: Pyramid Pooling Transformer for Scene Understanding,P2T: Pyramid Pooling Transformer for Scene Understanding,http://arxiv.org/pdf/2106.12011v6,"Recently, the vision transformer has achieved great success by pushing the state-of-the-art of various vision tasks. One of the most challenging problems in the vision transformer is that the large sequence length of image tokens leads to high computational cost (quadratic complexity). A popular solution to this problem is to use a single pooling operation to reduce the sequence length. This paper considers how to improve existing vision transformers, where the pooled feature extracted by a single pooling operation seems less powerful. To this end, we note that pyramid pooling has been demonstrated to be effective in various vision tasks owing to its powerful ability in context abstraction. However, pyramid pooling has not been explored in backbone network design. To bridge this gap, we propose to adapt pyramid pooling to Multi-Head Self-Attention (MHSA) in the vision transformer, simultaneously reducing the sequence length and capturing powerful contextual features. Plugged with our pooling-based MHSA, we build a universal vision transformer backbone, dubbed Pyramid Pooling Transformer (P2T).
Extensive experiments demonstrate that, when applied P2T as the backbone network, it shows substantial superiority in various vision tasks such as image classification, semantic segmentation, object detection, and instance segmentation, compared to previous CNN- and transformer-based networks. The code will be released at https://github.com/yuhuan-wu/P2T." "Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation",2506.02853v1,PVT,\cite{PVT},"Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions",http://arxiv.org/abs/2102.12122v2,"Although using convolutional neural networks (CNNs) as backbones achieves great successes in computer vision, this work investigates a simple backbone network useful for many dense prediction tasks without convolutions. Unlike the recently-proposed Transformer model (e.g., ViT) that is specially designed for image classification, we propose Pyramid Vision Transformer~(PVT), which overcomes the difficulties of porting Transformer to various dense prediction tasks. PVT has several merits compared to prior arts. (1) Different from ViT that typically has low-resolution outputs and high computational and memory cost, PVT can be not only trained on dense partitions of the image to achieve high output resolution, which is important for dense predictions but also using a progressive shrinking pyramid to reduce computations of large feature maps. (2) PVT inherits the advantages from both CNN and Transformer, making it a unified backbone in various vision tasks without convolutions by simply replacing CNN backbones. (3) We validate PVT by conducting extensive experiments, showing that it boosts the performance of many downstream tasks, e.g., object detection, semantic, and instance segmentation. For example, with a comparable number of parameters, RetinaNet+PVT achieves 40.4 AP on the COCO dataset, surpassing RetinNet+ResNet50 (36.3 AP) by 4.1 absolute AP. We hope PVT could serve as an alternative and useful backbone for pixel-level predictions and facilitate future researches. Code is available at https://github.com/whai362/PVT.",True,True,"Wang, Wenhai and Xie, Enze and Li, Xiang and Fan, Deng-Ping and Song, Kaitao and Liang, Ding and Lu, Tong and Luo, Ping and Shao, Ling",2021.0,,,,,"Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions",Pyramid Vision Transformer: A Versatile Backbone for Dense ... 
- arXiv,https://arxiv.org/abs/2102.12122,"Authors:Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao" Probabilistic Online Event Downsampling,2506.02547v1,cohen2018spatial,\cite{cohen2018spatial},Spatial and temporal downsampling in event-based visual classification,,,True,False,"Cohen, Gregory and Afshar, Saeed and Orchard, Garrick and Tapson, Jonathan and Benosman, Ryad and van Schaik, Andre",2018.0,,,,IEEE Transactions on Neural Networks and Learning Systems,Spatial and temporal downsampling in event-based visual classification,Spatial and Temporal Downsampling in Event-Based ...,https://www.researchgate.net/publication/322566649_Spatial_and_Temporal_Downsampling_in_Event-Based_Visual_Classification,"The results show that both spatial downsampling and temporal downsampling produce improved classification accuracy and, additionally, a lower overall data rate." Probabilistic Online Event Downsampling,2506.02547v1,ghoshevdownsampling,\cite{ghoshevdownsampling},EvDownsampling: a robust method for downsampling event camera data,,,True,False,"Ghosh, Anindya and Nowotny, Thomas and Knight, James",,,,,,EvDownsampling: a robust method for downsampling event camera data,a robust method for downsampling event camera data,https://sussex.figshare.com/articles/conference_contribution/EvDownsampling_a_robust_method_for_downsampling_event_camera_data/26970640,by A Ghosh · Cited by 1 — We present a bio-inspired spatio-temporal downsampling technique that can downsample event streams by factors of up to 16 times. Probabilistic Online Event Downsampling,2506.02547v1,barrios2018less,\cite{barrios2018less},Less data same information for event-based sensors: A bioinspired filtering and data reduction algorithm,,,True,False,"Barrios-Avil{\'e}s, Juan and Rosado-Mu{\~n}oz, Alfredo and Medus, Leandro D and Bataller-Mompe{\'a}n, Manuel and Guerrero-Mart{\'\i}nez, Juan F",2018.0,,,,Sensors,Less data same information for event-based sensors: A bioinspired filtering and data reduction algorithm,Less Data Same Information for Event-Based Sensors: A Bioinspired ...,https://pmc.ncbi.nlm.nih.gov/articles/PMC6308842/,This work proposes a filtering algorithm (LDSI—Less Data Same Information) which reduces the generated data from event-based sensors without loss of relevant Probabilistic Online Event Downsampling,2506.02547v1,gupta2020implementing,\cite{gupta2020implementing},"Implementing a foveal-pit inspired filter in a Spiking Convolutional Neural Network: a preliminary study",http://arxiv.org/abs/2105.14326v1,"We have presented a Spiking Convolutional Neural Network (SCNN) that incorporates retinal foveal-pit inspired Difference of Gaussian filters and rank-order encoding. The model is trained using a variant of the backpropagation algorithm adapted to work with spiking neurons, as implemented in the Nengo library.
We have evaluated the performance of our model on two publicly available datasets - one for digit recognition task, and the other for vehicle recognition task. The network has achieved up to 90% accuracy, where loss is calculated using the cross-entropy function. This is an improvement over around 57% accuracy obtained with the alternate approach of performing the classification without any kind of neural filtering. Overall, our proof-of-concept study indicates that introducing biologically plausible filtering in existing SCNN architecture will work well with noisy input images such as those in our vehicle recognition task. Based on our results, we plan to enhance our SCNN by integrating lateral inhibition-based redundancy reduction prior to rank-ordering, which will further improve the classification accuracy by the network.",True,True,"Gupta, Shriya TP and Bhattacharya, Basabdatta Sen",2020.0,,,,,"Implementing a foveal-pit inspired filter in a Spiking Convolutional Neural Network: a preliminary study",(PDF) Implementing a foveal-pit inspired filter in a Spiking ...,https://www.researchgate.net/publication/352016174_Implementing_a_foveal-pit_inspired_filter_in_a_Spiking_Convolutional_Neural_Network_a_preliminary_study,We have presented a Spiking Convolutional Neural Network (SCNN) that incorporates retinal foveal-pit inspired Difference of Gaussian filters and rank-order Probabilistic Online Event Downsampling,2506.02547v1,Gruel_2023_WACV,\cite{Gruel_2023_WACV},Performance Comparison of DVS Data Spatial Downscaling Methods Using Spiking Neural Networks,,,True,False,"Gruel, Am\'elie and Martinet, Jean and Linares-Barranco, Bernab\'e and Serrano-Gotarredona, Teresa",2023.0,January,,,,Performance Comparison of DVS Data Spatial Downscaling Methods Using Spiking Neural Networks,Performance comparison of DVS data spatial downscaling ...,https://openaccess.thecvf.com/content/WACV2023/supplemental/Gruel_Performance_Comparison_of_WACV_2023_supplemental.pdf,"Performance comparison of DVS data spatial downscaling methods using Spiking Neural Networks. Supplementary Material. Amélie Gruel. CNRS, i3S, Université Côte" Probabilistic Online Event Downsampling,2506.02547v1,ghosh2023insect,\cite{ghosh2023insect},Insect-inspired Spatio-temporal Downsampling of Event-based Input,,,True,False,"Ghosh, Anindya and Nowotny, Thomas and Knight, James C",2023.0,,,,,Insect-inspired Spatio-temporal Downsampling of Event-based Input,Insect-inspired Spatio-temporal Downsampling of Event-based Input,https://dl.acm.org/doi/pdf/10.1145/3589737.3605994,We show that our downsampled event streams achieve high fidelity with a hypothetical low-resolution event camera and improve classification performance on Probabilistic Online Event Downsampling,2506.02547v1,rizzo2023neuromorphic,\cite{rizzo2023neuromorphic},Neuromorphic downsampling of event-based camera output,,,True,False,"Rizzo, Charles P and Schuman, Catherine D and Plank, James S",2023.0,,,,,Neuromorphic downsampling of event-based camera output,Neuromorphic Downsampling of Event-Based Camera Output,https://dl.acm.org/doi/10.1145/3584954.3584962,We construct multiple neuromorphic networks that downsample the camera data so as to make training more effective. Probabilistic Online Event Downsampling,2506.02547v1,bisulco2020near,\cite{bisulco2020near},"Near-chip Dynamic Vision Filtering for Low-Bandwidth Pedestrian Detection",http://arxiv.org/abs/2004.01689v1,"This paper presents a novel end-to-end system for pedestrian detection using Dynamic Vision Sensors (DVSs). 
We target applications where multiple sensors transmit data to a local processing unit, which executes a detection algorithm. Our system is composed of (i) a near-chip event filter that compresses and denoises the event stream from the DVS, and (ii) a Binary Neural Network (BNN) detection module that runs on a low-computation edge computing device (in our case a STM32F4 microcontroller). We present the system architecture and provide an end-to-end implementation for pedestrian detection in an office environment. Our implementation reduces transmission size by up to 99.6% compared to transmitting the raw event stream. The average packet size in our system is only 1397 bits, while 307.2 kb are required to send an uncompressed DVS time window. Our detector is able to perform a detection every 450 ms, with an overall testing F1 score of 83%. The low bandwidth and energy properties of our system make it ideal for IoT applications.",True,True,"Bisulco, Anthony and Ojeda, Fernando Cladera and Isler, Volkan and Lee, Daniel Dongyuel",2020.0,,,,,"Near-chip Dynamic Vision Filtering for Low-Bandwidth Pedestrian Detection",Near-Chip Dynamic Vision Filtering for Low-Bandwidth ...,https://ieeexplore.ieee.org/document/9155035/,by A Bisulco · 2020 · Cited by 12 — This paper presents a novel end-to-end system for pedestrian detection using Dynamic Vision Sensors (DVSs). We target applications where multiple sensors Probabilistic Online Event Downsampling,2506.02547v1,bi2019graph,\cite{bi2019graph},Graph-Based Object Classification for Neuromorphic Vision Sensing,http://arxiv.org/abs/1908.06648v1,"Neuromorphic vision sensing (NVS)\ devices represent visual information as sequences of asynchronous discrete events (a.k.a., ``spikes'') in response to changes in scene reflectance. Unlike conventional active pixel sensing (APS), NVS allows for significantly higher event sampling rates at substantially increased energy efficiency and robustness to illumination changes. However, object classification with NVS streams cannot leverage on state-of-the-art convolutional neural networks (CNNs), since NVS does not produce frame representations. To circumvent this mismatch between sensing and processing with CNNs, we propose a compact graph representation for NVS. We couple this with novel residual graph CNN architectures and show that, when trained on spatio-temporal NVS data for object classification, such residual graph CNNs preserve the spatial and temporal coherence of spike events, while requiring less computation and memory. Finally, to address the absence of large real-world NVS datasets for complex recognition tasks, we present and make available a 100k dataset of NVS recordings of the American sign language letters, acquired with an iniLabs DAVIS240c device under real-world conditions.",True,True,"Bi, Yin and Chadha, Aaron and Abbas, Alhabib and Bourtsoulatze, Eirina and Andreopoulos, Yiannis",2019.0,,,,,Graph-Based Object Classification for Neuromorphic Vision Sensing,Graph-based Object Classification for Neuromorphic Vision Sensing,https://github.com/PIX2NVS/NVS2Graph,Our goal is to represent the stream of spike events from neuromorphic vision sensors as a graph and perform convolution on the graph for object classification. Probabilistic Online Event Downsampling,2506.02547v1,gruel2023frugal,\cite{gruel2023frugal},Frugal event data: how small is too small? 
A human performance assessment with shrinking data,,,True,False,"Gruel, Am{\'e}lie and Carreras, Luc{\'\i}a Trillo and Garc{\'\i}a, Marina Bueno and Kupczyk, Ewa and Martinet, Jean",2023.0,,,,,Frugal event data: how small is too small? A human performance assessment with shrinking data,Frugal event data: how small is too small? A human performance ...,https://ieeexplore.ieee.org/document/10208584/,Frugal event data: how small is too small? A human performance assessment with shrinking data. Abstract: When designing embedded computer vision systems with Probabilistic Online Event Downsampling,2506.02547v1,araghi2024pushing,\cite{araghi2024pushing},"Pushing the boundaries of event subsampling in event-based video classification using CNNs",http://arxiv.org/abs/2409.08953v1,"Event cameras offer low-power visual sensing capabilities ideal for edge-device applications. However, their high event rate, driven by high temporal details, can be restrictive in terms of bandwidth and computational resources. In edge AI applications, determining the minimum amount of events for specific tasks can allow reducing the event rate to improve bandwidth, memory, and processing efficiency. In this paper, we study the effect of event subsampling on the accuracy of event data classification using convolutional neural network (CNN) models. Surprisingly, across various datasets, the number of events per video can be reduced by an order of magnitude with little drop in accuracy, revealing the extent to which we can push the boundaries in accuracy vs. event rate trade-off. Additionally, we also find that lower classification accuracy in high subsampling rates is not solely attributable to information loss due to the subsampling of the events, but that the training of CNNs can be challenging in highly subsampled scenarios, where the sensitivity to hyperparameters increases. We quantify training instability across multiple event-based classification datasets using a novel metric for evaluating the hyperparameter sensitivity of CNNs in different subsampling settings. Finally, we analyze the weight gradients of the network to gain insight into this instability.",True,True,"Araghi, Hesam and van Gemert, Jan and Tomen, Nergis",2024.0,,,,arXiv preprint arXiv:2409.08953,"Pushing the boundaries of event subsampling in event-based video classification using CNNs",Pushing the Boundaries of Event Subsampling in Event-Based ...,https://link.springer.com/chapter/10.1007/978-3-031-92460-6_17,"In this paper, we study the effect of event subsampling on the accuracy of event data classification using convolutional neural network (CNN) models." Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,buda2018systematic,\cite{buda2018systematic},"A systematic study of the class imbalance problem in convolutional neural networks",http://arxiv.org/abs/1710.05381v2,"In this study, we systematically investigate the impact of class imbalance on classification performance of convolutional neural networks (CNNs) and compare frequently used methods to address the issue. Class imbalance is a common problem that has been comprehensively studied in classical machine learning, yet very limited systematic research is available in the context of deep learning. 
In our study, we use three benchmark datasets of increasing complexity, MNIST, CIFAR-10 and ImageNet, to investigate the effects of imbalance on classification and perform an extensive comparison of several methods to address the issue: oversampling, undersampling, two-phase training, and thresholding that compensates for prior class probabilities. Our main evaluation metric is area under the receiver operating characteristic curve (ROC AUC) adjusted to multi-class tasks since overall accuracy metric is associated with notable difficulties in the context of imbalanced data. Based on results from our experiments we conclude that (i) the effect of class imbalance on classification performance is detrimental; (ii) the method of addressing class imbalance that emerged as dominant in almost all analyzed scenarios was oversampling; (iii) oversampling should be applied to the level that completely eliminates the imbalance, whereas the optimal undersampling ratio depends on the extent of imbalance; (iv) as opposed to some classical machine learning models, oversampling does not cause overfitting of CNNs; (v) thresholding should be applied to compensate for prior class probabilities when overall number of properly classified cases is of interest.",True,True,"Buda, Mateusz and Maki, Atsuto and Mazurowski, Maciej A",2018.0,,,,Neural networks,"A systematic study of the class imbalance problem in convolutional neural networks",A systematic study of the class imbalance problem in convolutional ...,https://arxiv.org/abs/1710.05381,"In this study, we systematically investigate the impact of class imbalance on classification performance of convolutional neural networks (CNNs)" Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,byrd2019effect,\cite{byrd2019effect},What is the Effect of Importance Weighting in Deep Learning?,http://arxiv.org/abs/1812.03372v3,"Importance-weighted risk minimization is a key ingredient in many machine learning algorithms for causal inference, domain adaptation, class imbalance, and off-policy reinforcement learning. While the effect of importance weighting is well-characterized for low-capacity misspecified models, little is known about how it impacts over-parameterized, deep neural networks. This work is inspired by recent theoretical results showing that on (linearly) separable data, deep linear networks optimized by SGD learn weight-agnostic solutions, prompting us to ask, for realistic deep networks, for which many practical datasets are separable, what is the effect of importance weighting? We present the surprising finding that while importance weighting impacts models early in training, its effect diminishes over successive epochs. Moreover, while L2 regularization and batch normalization (but not dropout), restore some of the impact of importance weighting, they express the effect via (seemingly) the wrong abstraction: why should practitioners tweak the L2 regularization, and by how much, to produce the correct weighting effect? Our experiments confirm these findings across a range of architectures and datasets.",True,True,"Byrd, Jonathon and Lipton, Zachary",2019.0,,,,,What is the Effect of Importance Weighting in Deep Learning?,What is the Effect of Importance Weighting in Deep Learning?,http://arxiv.org/pdf/1812.03372v3,"Importance-weighted risk minimization is a key ingredient in many machine learning algorithms for causal inference, domain adaptation, class imbalance, and off-policy reinforcement learning. 
While the effect of importance weighting is well-characterized for low-capacity misspecified models, little is known about how it impacts over-parameterized, deep neural networks. This work is inspired by recent theoretical results showing that on (linearly) separable data, deep linear networks optimized by SGD learn weight-agnostic solutions, prompting us to ask, for realistic deep networks, for which many practical datasets are separable, what is the effect of importance weighting? We present the surprising finding that while importance weighting impacts models early in training, its effect diminishes over successive epochs. Moreover, while L2 regularization and batch normalization (but not dropout), restore some of the impact of importance weighting, they express the effect via (seemingly) the wrong abstraction: why should practitioners tweak the L2 regularization, and by how much, to produce the correct weighting effect? Our experiments confirm these findings across a range of architectures and datasets." Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,drummond2003c4,\cite{drummond2003c4},"C4.5, class imbalance, and cost sensitivity: why under-sampling beats over-sampling",,,True,False,"Drummond, Chris and Holte, Robert C and others",2003.0,,,,,"C4.5, class imbalance, and cost sensitivity: why under-sampling beats over-sampling","[PDF] C4.5, Class Imbalance, and Cost Sensitivity: Why Under-Sampling ...",https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=4a57c0ffeec2665caf8e11574ce5a9618304b979,This paper shows that using C4.5 with under-sampling establishes a reasonable standard for algorithmic comparison. Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,pouyanfar2018dynamic,\cite{pouyanfar2018dynamic},Dynamic sampling in convolutional neural networks for imbalanced data classification,,,True,False,"Pouyanfar, Samira and Tao, Yudong and Mohan, Anup and Tian, Haiman and Kaseb, Ahmed S and Gauen, Kent and Dailey, Ryan and Aghajanzadeh, Sarah and Lu, Yung-Hsiang and Chen, Shu-Ching and others",2018.0,,,,,Dynamic sampling in convolutional neural networks for imbalanced data classification,Dynamic Sampling in Convolutional Neural Networks ... - IEEE Xplore,https://ieeexplore.ieee.org/document/8396983,"Dynamic Sampling in Convolutional Neural Networks for Imbalanced Data Classification. This paper presents a novel model based on the Convolutional Neural Networks (CNNs) to handle such imbalanced and heterogeneous data and successfully identifies the semantic concepts in these multimedia systems. The paper also presents a system that can retrieve real-time visual data from heterogeneous cameras, and the run-time environment allows the analysis programs to process the data from thousands of cameras simultaneously. **Published in:** 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR) Network cameras are stationary and can capture and send real-time visual data (image or video) continuously over the networks without human effort."
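The two rows above land on opposite sides of the re-sampling question: Buda et al. conclude that oversampling to the level that completely eliminates the imbalance works best for CNNs, while Drummond and Holte argue under-sampling sets the stronger baseline for C4.5. A minimal Python sketch of the oversampling side, assuming only an integer label array; the function name and fixed seed are illustrative, not from either paper.

import numpy as np

def oversample_to_balance(labels, rng=None):
    # Return indices that re-draw every class (with replacement) up to the
    # size of the largest class, fully eliminating the imbalance.
    if rng is None:
        rng = np.random.default_rng(0)
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    picked = [rng.choice(np.flatnonzero(labels == c), size=target, replace=True)
              for c in classes]
    return np.concatenate(picked)

# Example: class counts 100/10/2 become 100/100/100 after re-indexing.
balanced_idx = oversample_to_balance([0] * 100 + [1] * 10 + [2] * 2)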
Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,shen2016relay,\cite{shen2016relay},Relay backpropagation for effective learning of deep convolutional neural networks,,,True,False,"Shen, Li and Lin, Zhouchen and Huang, Qingming",2016.0,,,,,Relay backpropagation for effective learning of deep convolutional neural networks,Relay Backpropagation for Effective Learning of Deep Convolutional Neural Networks,http://arxiv.org/pdf/1512.05830v2,"Learning deeper convolutional neural networks becomes a tendency in recent years. However, many empirical evidences suggest that performance improvement cannot be gained by simply stacking more layers. In this paper, we consider the issue from an information theoretical perspective, and propose a novel method Relay Backpropagation, that encourages the propagation of effective information through the network in training stage. By virtue of the method, we achieved the first place in ILSVRC 2015 Scene Classification Challenge. Extensive experiments on two challenging large scale datasets demonstrate the effectiveness of our method is not restricted to a specific dataset or network architecture. Our models will be available to the research community later." Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,cui2019class,\cite{cui2019class},Class-Balanced Loss Based on Effective Number of Samples,http://arxiv.org/abs/1901.05555v1,"With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.",True,True,"Cui, Yin and Jia, Menglin and Lin, Tsung-Yi and Song, Yang and Belongie, Serge",2019.0,,,,,Class-Balanced Loss Based on Effective Number of Samples,[PDF] Class-Balanced Loss Based on Effective Number of Samples,https://openaccess.thecvf.com/content_CVPR_2019/papers/Cui_Class-Balanced_Loss_Based_on_Effective_Number_of_Samples_CVPR_2019_paper.pdf,"Class-balanced loss uses the effective number of samples, calculated by (1-β^n)/(1-β), to re-weight loss, addressing long-tailed data distribution."
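The cui2019class row gives an explicit formula, so a worked sketch is easy to ground: per-class weights inversely proportional to the effective number of samples (1-β^n)/(1-β). A minimal Python version, assuming only per-class sample counts; the sum-to-number-of-classes normalization is a common convention, not stated in the abstract.

import numpy as np

def class_balanced_weights(counts, beta=0.999):
    # Effective number of samples per class: (1 - beta^n) / (1 - beta).
    effective_num = (1.0 - np.power(beta, counts)) / (1.0 - beta)
    weights = 1.0 / effective_num
    # Normalize so the weights sum to the number of classes.
    return weights * len(counts) / weights.sum()

# Example: a long-tailed 4-class problem; the tail class gets the largest weight.
print(class_balanced_weights(np.array([5000, 500, 50, 5])))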
Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,huang2016learning,\cite{huang2016learning},Learning deep representation for imbalanced classification,,,True,False,"Huang, Chen and Li, Yining and Loy, Chen Change and Tang, Xiaoou",2016.0,,,,,Learning deep representation for imbalanced classification,[PDF] Learning Deep Representation for Imbalanced Classification,https://openaccess.thecvf.com/content_cvpr_2016/papers/Huang_Learning_Deep_Representation_CVPR_2016_paper.pdf,"In this paper, we conduct extensive and systematic experiments to validate the effectiveness of these classic schemes for representation learning on class-" Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,wang2017learning,\cite{wang2017learning},Learning to model the tail,,,True,False,"Wang, Yu-Xiong and Ramanan, Deva and Hebert, Martial",2017.0,,,,Advances in neural information processing systems,Learning to model the tail,Learning to Model the Tail,https://meta-learn.github.io/2017/papers/metalearn17_wang.pdf,"by YX Wang · Cited by 850 — We describe an approach to learning from long-tailed, imbalanced datasets that are prevalent in real-world settings. Here, the challenge is to learn" Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,khan2017cost,\cite{khan2017cost},"Cost Sensitive Learning of Deep Feature Representations from Imbalanced Data",http://arxiv.org/abs/1508.03422v3,"Class imbalance is a common problem in the case of real-world object detection and classification tasks. Data of some classes is abundant making them an over-represented majority, and data of other classes is scarce, making them an under-represented minority. This imbalance makes it challenging for a classifier to appropriately learn the discriminating boundaries of the majority and minority classes. In this work, we propose a cost sensitive deep neural network which can automatically learn robust feature representations for both the majority and minority classes. During training, our learning procedure jointly optimizes the class dependent costs and the neural network parameters. The proposed approach is applicable to both binary and multi-class problems without any modification. Moreover, as opposed to data level approaches, we do not alter the original data distribution which results in a lower computational cost during the training process. We report the results of our experiments on six major image classification datasets and show that the proposed approach significantly outperforms the baseline algorithms.
Comparisons with popular data sampling techniques and cost sensitive classifiers demonstrate the superior performance of our proposed method.",True,True,"Khan, Salman H and Hayat, Munawar and Bennamoun, Mohammed and Sohel, Ferdous A and Togneri, Roberto",2017.0,,,,IEEE transactions on neural networks and learning systems,"Cost Sensitive Learning of Deep Feature Representations from Imbalanced Data",Cost Sensitive Learning of Deep Feature Representations from ...,https://arxiv.org/abs/1508.03422,"In this work, we propose a cost sensitive deep neural network which can automatically learn robust feature representations for both the majority and minority" Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,shu2019meta,\cite{shu2019meta},Meta-weight-net: Learning an explicit mapping for sample weighting,,,True,False,"Shu, Jun and Xie, Qi and Yi, Lixuan and Zhao, Qian and Zhou, Sanping and Xu, Zongben and Meng, Deyu",2019.0,,,,Advances in neural information processing systems,Meta-weight-net: Learning an explicit mapping for sample weighting,xjtushujun/meta-weight-net,https://github.com/xjtushujun/meta-weight-net,NeurIPS'19: Meta-Weight-Net: Learning an Explicit Mapping For Sample Weighting (Official Pytorch implementation for noisy labels). Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,kang2019decoupling,\cite{kang2019decoupling},Decoupling Representation and Classifier for Long-Tailed Recognition,http://arxiv.org/abs/1910.09217v2,"The long-tail distribution of the visual world poses great challenges for deep learning based classification models on how to handle the class imbalance problem. Existing solutions usually involve class-balancing strategies, e.g., by loss re-weighting, data re-sampling, or transfer learning from head- to tail-classes, but most of them adhere to the scheme of jointly learning representations and classifiers. In this work, we decouple the learning procedure into representation learning and classification, and systematically explore how different balancing strategies affect them for long-tailed recognition. The findings are surprising: (1) data imbalance might not be an issue in learning high-quality representations; (2) with representations learned with the simplest instance-balanced (natural) sampling, it is also possible to achieve strong long-tailed recognition ability by adjusting only the classifier. We conduct extensive experiments and set new state-of-the-art performance on common long-tailed benchmarks like ImageNet-LT, Places-LT and iNaturalist, showing that it is possible to outperform carefully designed losses, sampling strategies, even complex modules with memory, by using a straightforward approach that decouples representation and classification. Our code is available at https://github.com/facebookresearch/classifier-balancing.",True,True,"Kang, Bingyi and Xie, Saining and Rohrbach, Marcus and Yan, Zhicheng and Gordo, Albert and Feng, Jiashi and Kalantidis, Yannis",2019.0,,,,arXiv preprint arXiv:1910.09217,Decoupling Representation and Classifier for Long-Tailed Recognition,[PDF] DECOUPLING REPRESENTATION AND CLASSIFIER - OpenReview,https://openreview.net/pdf?id=r1gRTCVFvB,We evaluate the performance of various sampling and classifier training strategies for long-tailed recognition under both joint and decoupled learning schemes.
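To make the decoupling result in the kang2019decoupling row concrete, here is a minimal PyTorch sketch of the second stage only: freeze a backbone trained with instance-balanced sampling, then re-train a fresh linear classifier under class-balanced sampling. `backbone`, `train_set`, and the optimizer settings are illustrative assumptions, not the paper's exact recipe.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, WeightedRandomSampler

def retrain_classifier(backbone, feat_dim, num_classes, train_set, targets, epochs=10):
    # Class-balanced sampling: each sample is drawn with probability
    # inversely proportional to its class frequency.
    targets = torch.as_tensor(targets)
    counts = torch.bincount(targets, minlength=num_classes).clamp_min(1).float()
    sampler = WeightedRandomSampler((1.0 / counts)[targets],
                                    num_samples=len(targets), replacement=True)
    loader = DataLoader(train_set, batch_size=256, sampler=sampler)

    for p in backbone.parameters():   # stage-1 representations stay fixed
        p.requires_grad_(False)
    backbone.eval()
    classifier = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.SGD(classifier.parameters(), lr=0.1, momentum=0.9)

    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                feats = backbone(x)
            loss = nn.functional.cross_entropy(classifier(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return classifier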
Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,zhou2020bbn,\cite{zhou2020bbn},"BBN: Bilateral-Branch Network with Cumulative Learning for Long-Tailed Visual Recognition",http://arxiv.org/abs/1912.02413v4,"Our work focuses on tackling the challenging but natural visual recognition task of long-tailed data distribution (i.e., a few classes occupy most of the data, while most classes have rarely few samples). In the literature, class re-balancing strategies (e.g., re-weighting and re-sampling) are the prominent and effective methods proposed to alleviate the extreme imbalance for dealing with long-tailed problems. In this paper, we firstly discover that these re-balancing methods achieving satisfactory recognition accuracy owe to that they could significantly promote the classifier learning of deep networks. However, at the same time, they will unexpectedly damage the representative ability of the learned deep features to some extent. Therefore, we propose a unified Bilateral-Branch Network (BBN) to take care of both representation learning and classifier learning simultaneously, where each branch does perform its own duty separately. In particular, our BBN model is further equipped with a novel cumulative learning strategy, which is designed to first learn the universal patterns and then pay attention to the tail data gradually. Extensive experiments on four benchmark datasets, including the large-scale iNaturalist ones, justify that the proposed BBN can significantly outperform state-of-the-art methods. Furthermore, validation experiments can demonstrate both our preliminary discovery and effectiveness of tailored designs in BBN for long-tailed problems. Our method won the first place in the iNaturalist 2019 large scale species classification competition, and our code is open-source and available at https://github.com/Megvii-Nanjing/BBN.",True,True,"Zhou, Boyan and Cui, Quan and Wei, Xiu-Shen and Chen, Zhao-Min",2020.0,,,,,"BBN: Bilateral-Branch Network with Cumulative Learning for Long-Tailed Visual Recognition",[PDF] BBN: Bilateral-Branch Network With Cumulative Learning for Long ...,https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhou_BBN_Bilateral-Branch_Network_With_Cumulative_Learning_for_Long-Tailed_Visual_Recognition_CVPR_2020_paper.pdf,"Our work focuses on tackling the challenging but natural visual recognition task of long-tailed data distribution. (i.e., a few classes occupy most of the" Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,zhang2023deep,\cite{zhang2023deep},Deep Long-Tailed Learning: A Survey,http://arxiv.org/abs/2110.04596v2,"Deep long-tailed learning, one of the most challenging problems in visual recognition, aims to train well-performing deep models from a large number of images that follow a long-tailed class distribution. In the last decade, deep learning has emerged as a powerful recognition model for learning high-quality image representations and has led to remarkable breakthroughs in generic visual recognition. However, long-tailed class imbalance, a common problem in practical visual recognition tasks, often limits the practicality of deep network based recognition models in real-world applications, since they can be easily biased towards dominant classes and perform poorly on tail classes. To address this problem, a large number of studies have been conducted in recent years, making promising progress in the field of deep long-tailed learning.
Considering the rapid evolution of this field, this paper aims to provide a comprehensive survey on recent advances in deep long-tailed learning. To be specific, we group existing deep long-tailed learning studies into three main categories (i.e., class re-balancing, information augmentation and module improvement), and review these methods following this taxonomy in detail. Afterward, we empirically analyze several state-of-the-art methods by evaluating to what extent they address the issue of class imbalance via a newly proposed evaluation metric, i.e., relative accuracy. We conclude the survey by highlighting important applications of deep long-tailed learning and identifying several promising directions for future research.",True,True,"Zhang, Yifan and Kang, Bingyi and Hooi, Bryan and Yan, Shuicheng and Feng, Jiashi",2023.0,,,,IEEE Transactions on Pattern Analysis and Machine Intelligence,Deep Long-Tailed Learning: A Survey,[2110.04596] Deep Long-Tailed Learning: A Survey,https://arxiv.org/abs/2110.04596,by Y Zhang · 2021 · Cited by 819 — This paper aims to provide a comprehensive survey on recent advances in deep long-tailed learning. Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,nam2023decoupled,\cite{nam2023decoupled},"Decoupled Training for Long-Tailed Classification With Stochastic Representations",http://arxiv.org/abs/2304.09426v1,"Decoupling representation learning and classifier learning has been shown to be effective in classification with long-tailed data. There are two main ingredients in constructing a decoupled learning scheme; 1) how to train the feature extractor for representation learning so that it provides generalizable representations and 2) how to re-train the classifier that constructs proper decision boundaries by handling class imbalances in long-tailed data. In this work, we first apply Stochastic Weight Averaging (SWA), an optimization technique for improving the generalization of deep neural networks, to obtain better generalizing feature extractors for long-tailed classification. We then propose a novel classifier re-training algorithm based on stochastic representation obtained from the SWA-Gaussian, a Gaussian perturbed SWA, and a self-distillation strategy that can harness the diverse stochastic representations based on uncertainty estimates to build more robust classifiers. Extensive experiments on CIFAR10/100-LT, ImageNet-LT, and iNaturalist-2018 benchmarks show that our proposed method improves upon previous methods both in terms of prediction accuracy and uncertainty estimation.",True,True,"Nam, Giung and Jang, Sunguk and Lee, Juho",2023.0,,,,arXiv preprint arXiv:2304.09426,"Decoupled Training for Long-Tailed Classification With Stochastic Representations",Decoupled Training for Long-Tailed Classification With ...,https://arxiv.org/abs/2304.09426,by G Nam · 2023 · Cited by 20 — Abstract:Decoupling representation learning and classifier learning has been shown to be effective in classification with long-tailed data. 
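The nam2023decoupled row above builds its stage-1 feature extractor with Stochastic Weight Averaging, so a minimal PyTorch sketch of plain SWA via torch.optim.swa_utils may help; the epoch split and learning rates are illustrative, and the paper's SWA-Gaussian perturbation and self-distillation steps are not reproduced here.

import torch
import torch.nn.functional as F
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

def train_with_swa(model, loader, epochs=90, swa_start=75):
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    swa_model = AveragedModel(model)      # running average of the weights
    swa_sched = SWALR(opt, swa_lr=0.05)   # constant LR during the SWA phase
    for epoch in range(epochs):
        for x, y in loader:
            loss = F.cross_entropy(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        if epoch >= swa_start:
            swa_model.update_parameters(model)  # accumulate the average
            swa_sched.step()
    update_bn(loader, swa_model)  # recompute BatchNorm statistics for the averaged model
    return swa_model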
Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,ren2020balanced,\cite{ren2020balanced},Balanced meta-softmax for long-tailed visual recognition,,,True,False,"Ren, Jiawei and Yu, Cunjun and Ma, Xiao and Zhao, Haiyu and Yi, Shuai and others",2020.0,,,,Advances in neural information processing systems,Balanced meta-softmax for long-tailed visual recognition,Balanced meta-softmax for long-tailed visual recognition,https://dl.acm.org/doi/10.5555/3495724.3496075,"In our experiments, we demonstrate that Balanced Meta-Softmax outperforms state-of-the-art long-tailed classification solutions on both visual recognition and" Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,menon2020long,\cite{menon2020long},Long-tail learning via logit adjustment,http://arxiv.org/abs/2007.07314v2,"Real-world classification problems typically exhibit an imbalanced or long-tailed label distribution, wherein many labels are associated with only a few samples. This poses a challenge for generalisation on such labels, and also makes na\""ive learning biased towards dominant labels. In this paper, we present two simple modifications of standard softmax cross-entropy training to cope with these challenges. Our techniques revisit the classic idea of logit adjustment based on the label frequencies, either applied post-hoc to a trained model, or enforced in the loss during training. Such adjustment encourages a large relative margin between logits of rare versus dominant labels. These techniques unify and generalise several recent proposals in the literature, while possessing firmer statistical grounding and empirical performance.",True,True,"Menon, Aditya Krishna and Jayasumana, Sadeep and Rawat, Ankit Singh and Jain, Himanshu and Veit, Andreas and Kumar, Sanjiv",2020.0,,,,arXiv preprint arXiv:2007.07314,Long-tail learning via logit adjustment,Long-tail learning via logit adjustment - OpenReview,https://openreview.net/forum?id=37nvvqkCo5,This paper provides a statistical framework for long-tail learning by revisiting the idea of logit adjustment based on the label frequencies. Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,cui2021parametric,\cite{cui2021parametric},Parametric Contrastive Learning,http://arxiv.org/abs/2107.12028v2,"In this paper, we propose Parametric Contrastive Learning (PaCo) to tackle long-tailed recognition. Based on theoretical analysis, we observe supervised contrastive loss tends to bias on high-frequency classes and thus increases the difficulty of imbalanced learning. We introduce a set of parametric class-wise learnable centers to rebalance from an optimization perspective. Further, we analyze our PaCo loss under a balanced setting. Our analysis demonstrates that PaCo can adaptively enhance the intensity of pushing samples of the same class close as more samples are pulled together with their corresponding centers and benefit hard example learning. Experiments on long-tailed CIFAR, ImageNet, Places, and iNaturalist 2018 manifest the new state-of-the-art for long-tailed recognition. On full ImageNet, models trained with PaCo loss surpass supervised contrastive learning across various ResNet backbones, e.g., our ResNet-200 achieves 81.8% top-1 accuracy. 
Our code is available at https://github.com/dvlab-research/Parametric-Contrastive-Learning.",True,True,"Cui, Jiequan and Zhong, Zhisheng and Liu, Shu and Yu, Bei and Jia, Jiaya",2021.0,,,,,Parametric Contrastive Learning,Parametric Contrastive Learning,http://arxiv.org/pdf/2107.12028v2,"In this paper, we propose Parametric Contrastive Learning (PaCo) to tackle long-tailed recognition. Based on theoretical analysis, we observe supervised contrastive loss tends to bias on high-frequency classes and thus increases the difficulty of imbalanced learning. We introduce a set of parametric class-wise learnable centers to rebalance from an optimization perspective. Further, we analyze our PaCo loss under a balanced setting. Our analysis demonstrates that PaCo can adaptively enhance the intensity of pushing samples of the same class close as more samples are pulled together with their corresponding centers and benefit hard example learning. Experiments on long-tailed CIFAR, ImageNet, Places, and iNaturalist 2018 manifest the new state-of-the-art for long-tailed recognition. On full ImageNet, models trained with PaCo loss surpass supervised contrastive learning across various ResNet backbones, e.g., our ResNet-200 achieves 81.8% top-1 accuracy. Our code is available at https://github.com/dvlab-research/Parametric-Contrastive-Learning." Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,cui2023generalized,\cite{cui2023generalized},Generalized Parametric Contrastive Learning,http://arxiv.org/abs/2209.12400v2,"In this paper, we propose the Generalized Parametric Contrastive Learning (GPaCo/PaCo) which works well on both imbalanced and balanced data. Based on theoretical analysis, we observe that supervised contrastive loss tends to bias high-frequency classes and thus increases the difficulty of imbalanced learning. We introduce a set of parametric class-wise learnable centers to rebalance from an optimization perspective. Further, we analyze our GPaCo/PaCo loss under a balanced setting. Our analysis demonstrates that GPaCo/PaCo can adaptively enhance the intensity of pushing samples of the same class close as more samples are pulled together with their corresponding centers and benefit hard example learning. Experiments on long-tailed benchmarks manifest the new state-of-the-art for long-tailed recognition. On full ImageNet, models from CNNs to vision transformers trained with GPaCo loss show better generalization performance and stronger robustness compared with MAE models. Moreover, GPaCo can be applied to the semantic segmentation task and obvious improvements are observed on the 4 most popular benchmarks. Our code is available at https://github.com/dvlab-research/Parametric-Contrastive-Learning.",True,True,"Cui, Jiequan and Zhong, Zhisheng and Tian, Zhuotao and Liu, Shu and Yu, Bei and Jia, Jiaya",2023.0,,,,IEEE Transactions on Pattern Analysis and Machine Intelligence,Generalized Parametric Contrastive Learning,Generalized Parametric Contrastive Learning,http://arxiv.org/pdf/2209.12400v2,"In this paper, we propose the Generalized Parametric Contrastive Learning (GPaCo/PaCo) which works well on both imbalanced and balanced data. Based on theoretical analysis, we observe that supervised contrastive loss tends to bias high-frequency classes and thus increases the difficulty of imbalanced learning. We introduce a set of parametric class-wise learnable centers to rebalance from an optimization perspective. Further, we analyze our GPaCo/PaCo loss under a balanced setting. 
Our analysis demonstrates that GPaCo/PaCo can adaptively enhance the intensity of pushing samples of the same class close as more samples are pulled together with their corresponding centers and benefit hard example learning. Experiments on long-tailed benchmarks manifest the new state-of-the-art for long-tailed recognition. On full ImageNet, models from CNNs to vision transformers trained with GPaCo loss show better generalization performance and stronger robustness compared with MAE models. Moreover, GPaCo can be applied to the semantic segmentation task and obvious improvements are observed on the 4 most popular benchmarks. Our code is available at https://github.com/dvlab-research/Parametric-Contrastive-Learning." Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,zhu2022balanced,\cite{zhu2022balanced},Balanced Contrastive Learning for Long-Tailed Visual Recognition,http://arxiv.org/abs/2207.09052v3,"Real-world data typically follow a long-tailed distribution, where a few majority categories occupy most of the data while most minority categories contain a limited number of samples. Classification models minimizing cross-entropy struggle to represent and classify the tail classes. Although the problem of learning unbiased classifiers has been well studied, methods for representing imbalanced data are under-explored. In this paper, we focus on representation learning for imbalanced data. Recently, supervised contrastive learning has shown promising performance on balanced data recently. However, through our theoretical analysis, we find that for long-tailed data, it fails to form a regular simplex which is an ideal geometric configuration for representation learning. To correct the optimization behavior of SCL and further improve the performance of long-tailed visual recognition, we propose a novel loss for balanced contrastive learning (BCL). Compared with SCL, we have two improvements in BCL: class-averaging, which balances the gradient contribution of negative classes; class-complement, which allows all classes to appear in every mini-batch. The proposed balanced contrastive learning (BCL) method satisfies the condition of forming a regular simplex and assists the optimization of cross-entropy. Equipped with BCL, the proposed two-branch framework can obtain a stronger feature representation and achieve competitive performance on long-tailed benchmark datasets such as CIFAR-10-LT, CIFAR-100-LT, ImageNet-LT, and iNaturalist2018. Our code is available at https://github.com/FlamieZhu/BCL .",True,True,"Zhu, Jianggang and Wang, Zheng and Chen, Jingjing and Chen, Yi-Ping Phoebe and Jiang, Yu-Gang",2022.0,,,,,Balanced Contrastive Learning for Long-Tailed Visual Recognition,Balanced Contrastive Learning for Long-Tailed Visual Recognition,https://arxiv.org/abs/2207.09052,The proposed balanced contrastive learning (BCL) method satisfies the condition of forming a regular simplex and assists the optimization of cross-entropy. Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,suh2023long,\cite{suh2023long},"Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels",http://arxiv.org/abs/2305.01160v3,"Although contrastive learning methods have shown prevailing performance on a variety of representation learning tasks, they encounter difficulty when the training dataset is long-tailed. 
Many researchers have combined contrastive learning and a logit adjustment technique to address this problem, but the combinations are done ad-hoc and a theoretical background has not yet been provided. The goal of this paper is to provide the background and further improve the performance. First, we show that the fundamental reason contrastive learning methods struggle with long-tailed tasks is that they try to maximize the mutual information maximization between latent features and input data. As ground-truth labels are not considered in the maximization, they are not able to address imbalances between class labels. Rather, we interpret the long-tailed recognition task as a mutual information maximization between latent features and ground-truth labels. This approach integrates contrastive learning and logit adjustment seamlessly to derive a loss function that shows state-of-the-art performance on long-tailed recognition benchmarks. It also demonstrates its efficacy in image segmentation tasks, verifying its versatility beyond image classification.",True,True,"Suh, Min-Kook and Seo, Seung-Woo",2023.0,,,,arXiv preprint arXiv:2305.01160,"Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels",Long-Tailed Recognition by Mutual Information ...,https://openreview.net/pdf?id=KqNX6VOqnJ,"by MK Suh · Cited by 27 — Rather, we interpret the long-tailed recognition task as a mutual information maximization between latent features and ground-truth labels. This approach." Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,zhu2024generalized,\cite{zhu2024generalized},"Generalized Logit Adjustment: Calibrating Fine-tuned Models by Removing Label Bias in Foundation Models",http://arxiv.org/abs/2310.08106v3,"Foundation models like CLIP allow zero-shot transfer on various tasks without additional training data. Yet, the zero-shot performance is less competitive than a fully supervised one. Thus, to enhance the performance, fine-tuning and ensembling are also commonly adopted to better fit the downstream tasks. However, we argue that such prior work has overlooked the inherent biases in foundation models. Due to the highly imbalanced Web-scale training set, these foundation models are inevitably skewed toward frequent semantics, and thus the subsequent fine-tuning or ensembling is still biased. In this study, we systematically examine the biases in foundation models and demonstrate the efficacy of our proposed Generalized Logit Adjustment (GLA) method. Note that bias estimation in foundation models is challenging, as most pre-train data cannot be explicitly accessed like in traditional long-tailed classification tasks. To this end, GLA has an optimization-based bias estimation approach for debiasing foundation models. As our work resolves a fundamental flaw in the pre-training, the proposed GLA demonstrates significant improvements across a diverse range of tasks: it achieves 1.5 pp accuracy gains on ImageNet, an large average improvement (1.4-4.6 pp) on 11 few-shot datasets, 2.4 pp gains on long-tailed classification. Codes are in \url{https://github.com/BeierZhu/GLA}.",True,True,"Zhu, Beier and Tang, Kaihua and Sun, Qianru and Zhang, Hanwang",2024.0,,,,Advances in Neural Information Processing Systems,"Generalized Logit Adjustment: Calibrating Fine-tuned Models by Removing Label Bias in Foundation Models",Calibrating Fine-tuned Models by Removing Label Bias in ... 
- arXiv,https://arxiv.org/abs/2310.08106, Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,he2020momentum,\cite{he2020momentum},Momentum Contrast for Unsupervised Visual Representation Learning,http://arxiv.org/abs/1911.05722v3,"We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder. This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning. MoCo provides competitive results under the common linear protocol on ImageNet classification. More importantly, the representations learned by MoCo transfer well to downstream tasks. MoCo can outperform its supervised pre-training counterpart in 7 detection/segmentation tasks on PASCAL VOC, COCO, and other datasets, sometimes surpassing it by large margins. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks.",True,True,"He, Kaiming and Fan, Haoqi and Wu, Yuxin and Xie, Saining and Girshick, Ross",2020.0,,,,,Momentum Contrast for Unsupervised Visual Representation Learning,Momentum Contrast for Unsupervised Visual Representation Learning,http://arxiv.org/pdf/1911.05722v3,"We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder. This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning. MoCo provides competitive results under the common linear protocol on ImageNet classification. More importantly, the representations learned by MoCo transfer well to downstream tasks. MoCo can outperform its supervised pre-training counterpart in 7 detection/segmentation tasks on PASCAL VOC, COCO, and other datasets, sometimes surpassing it by large margins. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks." Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,chen2020simple,\cite{chen2020simple},A Simple Framework for Contrastive Learning of Visual Representations,http://arxiv.org/abs/2002.05709v3,"This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet.
A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.",True,True,"Chen, Ting and Kornblith, Simon and Norouzi, Mohammad and Hinton, Geoffrey",2020.0,,,,,A Simple Framework for Contrastive Learning of Visual Representations,A Simple Framework for Contrastive Learning of Visual Representations,http://arxiv.org/pdf/2002.05709v3,"This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels." Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,grill2020bootstrap,\cite{grill2020bootstrap},Bootstrap your own latent: A new approach to self-supervised Learning,http://arxiv.org/abs/2006.07733v3,"We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning. BYOL relies on two neural networks, referred to as online and target networks, that interact and learn from each other. From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view. At the same time, we update the target network with a slow-moving average of the online network. While state-of-the art methods rely on negative pairs, BYOL achieves a new state of the art without them. BYOL reaches $74.3\%$ top-1 classification accuracy on ImageNet using a linear evaluation with a ResNet-50 architecture and $79.6\%$ with a larger ResNet. We show that BYOL performs on par or better than the current state of the art on both transfer and semi-supervised benchmarks. 
Our implementation and pretrained models are given on GitHub.",True,True,"Grill, Jean-Bastien and Strub, Florian and Altch{\'e}, Florent and Tallec, Corentin and Richemond, Pierre and Buchatskaya, Elena and Doersch, Carl and Avila Pires, Bernardo and Guo, Zhaohan and Gheshlaghi Azar, Mohammad and others",2020.0,,,,Advances in neural information processing systems,Bootstrap your own latent: A new approach to self-supervised Learning,[PDF] Bootstrap Your Own Latent A New Approach to Self-Supervised ...,https://papers.nips.cc/paper/2020/file/f3ada80d5c4ee70142b17b8192b2958e-Paper.pdf,"We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning. In this paper, we introduce Bootstrap Your Own Latent (BYOL), a new algorithm for self-supervised learning of image representations. Starting from an augmented view of an image, BYOL trains its online network to predict the target network’s representation of another augmented view of the same image. Our contributions are: (i) We introduce BYOL, a self-supervised representation learning method (Section 3) which achieves state-of-the-art results under the linear evaluation protocol on ImageNet without using negative pairs." Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,chen2021exploring,\cite{chen2021exploring},Exploring Simple Siamese Representation Learning,http://arxiv.org/abs/2011.10566v1,"Siamese networks have become a common structure in various recent models for unsupervised visual representation learning. These models maximize the similarity between two augmentations of one image, subject to certain conditions for avoiding collapsing solutions. In this paper, we report surprising empirical results that simple Siamese networks can learn meaningful representations even using none of the following: (i) negative sample pairs, (ii) large batches, (iii) momentum encoders.
Our experiments show that collapsing solutions do exist for the loss and structure, but a stop-gradient operation plays an essential role in preventing collapsing. We provide a hypothesis on the implication of stop-gradient, and further show proof-of-concept experiments verifying it. Our ""SimSiam"" method achieves competitive results on ImageNet and downstream tasks. We hope this simple baseline will motivate people to rethink the roles of Siamese architectures for unsupervised representation learning. Code will be made available." Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,khosla2020supervised,\cite{khosla2020supervised},Supervised Contrastive Learning,http://arxiv.org/abs/2004.11362v5,"Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state of the art performance in the unsupervised training of deep image models. Modern batch contrastive approaches subsume or significantly outperform traditional contrastive losses such as triplet, max-margin and the N-pairs loss. In this work, we extend the self-supervised batch contrastive approach to the fully-supervised setting, allowing us to effectively leverage label information. Clusters of points belonging to the same class are pulled together in embedding space, while simultaneously pushing apart clusters of samples from different classes. We analyze two possible versions of the supervised contrastive (SupCon) loss, identifying the best-performing formulation of the loss. On ResNet-200, we achieve top-1 accuracy of 81.4% on the ImageNet dataset, which is 0.8% above the best number reported for this architecture. We show consistent outperformance over cross-entropy on other datasets and two ResNet variants. The loss shows benefits for robustness to natural corruptions and is more stable to hyperparameter settings such as optimizers and data augmentations. Our loss function is simple to implement, and reference TensorFlow code is released at https://t.ly/supcon.",True,True,"Khosla, Prannay and Teterwak, Piotr and Wang, Chen and Sarna, Aaron and Tian, Yonglong and Isola, Phillip and Maschinot, Aaron and Liu, Ce and Krishnan, Dilip",2020.0,,,,Advances in neural information processing systems,Supervised Contrastive Learning,[PDF] Supervised Contrastive Learning,https://proceedings.neurips.cc/paper/2020/file/d89a66c7c80a29b1bdbab0f2a1a94af8-Paper.pdf,"Supervised contrastive learning uses label information to pull together samples of the same class, unlike self-supervised which uses data augmentations." Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,du2024probabilistic,\cite{du2024probabilistic},Probabilistic Contrastive Learning for Long-Tailed Visual Recognition,http://arxiv.org/abs/2403.06726v2,"Long-tailed distributions frequently emerge in real-world data, where a large number of minority categories contain a limited number of samples. Such imbalance issue considerably impairs the performance of standard supervised learning algorithms, which are mainly designed for balanced training sets. Recent investigations have revealed that supervised contrastive learning exhibits promising potential in alleviating the data imbalance. However, the performance of supervised contrastive learning is plagued by an inherent challenge: it necessitates sufficiently large batches of training data to construct contrastive pairs that cover all categories, yet this requirement is difficult to meet in the context of class-imbalanced data. 
To overcome this obstacle, we propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the data distribution of the samples from each class in the feature space, and samples contrastive pairs accordingly. In fact, estimating the distributions of all classes using features in a small batch, particularly for imbalanced data, is not feasible. Our key idea is to introduce a reasonable and simple assumption that the normalized features in contrastive learning follow a mixture of von Mises-Fisher (vMF) distributions on unit space, which brings two-fold benefits. First, the distribution parameters can be estimated using only the first sample moment, which can be efficiently computed in an online manner across different batches. Second, based on the estimated distribution, the vMF distribution allows us to sample an infinite number of contrastive pairs and derive a closed form of the expected contrastive loss for efficient optimization. Our code is available at https://github.com/LeapLabTHU/ProCo.",True,True,"Du, Chaoqun and Wang, Yulin and Song, Shiji and Huang, Gao",2024.0,,,,IEEE Transactions on Pattern Analysis and Machine Intelligence,Probabilistic Contrastive Learning for Long-Tailed Visual Recognition,LeapLabTHU/ProCo: [TPAMI 2024] Probabilistic ...,https://github.com/LeapLabTHU/ProCo,"GitHub - LeapLabTHU/ProCo: [TPAMI 2024] Probabilistic Contrastive Learning for Long-Tailed Visual Recognition. This repository contains the Pytorch implementation of the T-PAMI 2024 paper Probabilistic Contrastive Learning for Long-Tailed Visual Recognition. We proposed a novel probabilistic contrastive (ProCo) learning algorithm for long-tailed distribution. | ProCo | CIFAR100-LT | 100 | 200 | 52.8 | Tsinghua Cloud/Google Drive | | ProCo | CIFAR100-LT | 100 | 400 | 54.2 | Tsinghua Cloud/Google Drive | bash sh/ProCo_CIFAR.sh ${dataset} ${imbalance_factor} ${epochs} bash sh/ProCo_ImageNetLT_X50_90epochs.sh Our code is based on the BCL (Balanced Contrastive Learning for Long-Tailed Visual Recognition) repository." Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,zhang2022fairness,\cite{zhang2022fairness},Fairness-aware contrastive learning with partially annotated sensitive attributes,,,True,False,"Zhang, Fengda and Kuang, Kun and Chen, Long and Liu, Yuxuan and Wu, Chao and Xiao, Jun",2022.0,,,,,Fairness-aware contrastive learning with partially annotated sensitive attributes,Fairness-aware Contrastive Learning with Partially Annotated...,https://openreview.net/forum?id=woa783QMul,The paper proposes a variation of contrastive learning that learns fair representation with partially annotated sensitive attribute labels. Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,hou2023subclass,\cite{hou2023subclass},Subclass-balancing Contrastive Learning for Long-tailed Recognition,http://arxiv.org/abs/2306.15925v2,"Long-tailed recognition with imbalanced class distribution naturally emerges in practical machine learning applications. Existing methods such as data reweighing, resampling, and supervised contrastive learning enforce the class balance with a price of introducing imbalance between instances of head class and tail class, which may ignore the underlying rich semantic substructures of the former and exaggerate the biases in the latter. 
We overcome these drawbacks by a novel ``subclass-balancing contrastive learning (SBCL)'' approach that clusters each head class into multiple subclasses of similar sizes as the tail classes and enforce representations to capture the two-layer class hierarchy between the original classes and their subclasses. Since the clustering is conducted in the representation space and updated during the course of training, the subclass labels preserve the semantic substructures of head classes. Meanwhile, it does not overemphasize tail class samples, so each individual instance contribute to the representation learning equally. Hence, our method achieves both the instance- and subclass-balance, while the original class labels are also learned through contrastive learning among subclasses from different classes. We evaluate SBCL over a list of long-tailed benchmark datasets and it achieves the state-of-the-art performance. In addition, we present extensive analyses and ablation studies of SBCL to verify its advantages.",True,True,"Hou, Chengkai and Zhang, Jieyu and Wang, Haonan and Zhou, Tianyi",2023.0,,,,,Subclass-balancing Contrastive Learning for Long-tailed Recognition,Subclass-balancing Contrastive Learning for Long-tailed Recognition,https://arxiv.org/abs/2306.15925,A novel subclass-balancing contrastive learning (SBCL) approach that clusters each head class into multiple subclasses of similar sizes as the tail classes. Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,kang2020exploring,\cite{kang2020exploring},Exploring balanced feature spaces for representation learning,,,True,False,"Kang, Bingyi and Li, Yu and Xie, Sa and Yuan, Zehuan and Feng, Jiashi",2020.0,,,,,Exploring balanced feature spaces for representation learning,[PDF] EXPLORING BALANCED FEATURE SPACES FOR REP,https://openreview.net/pdf?id=OqtLIabPTit,(4) We develop a new method to explicitly pursue balanced feature spaces for representation learning and it outperforms the popular cross-entropy and Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,li2022targeted,\cite{li2022targeted},Targeted Supervised Contrastive Learning for Long-Tailed Recognition,http://arxiv.org/abs/2111.13998v2,"Real-world data often exhibits long tail distributions with heavy class imbalance, where the majority classes can dominate the training process and alter the decision boundaries of the minority classes. Recently, researchers have investigated the potential of supervised contrastive learning for long-tailed recognition, and demonstrated that it provides a strong performance gain. In this paper, we show that while supervised contrastive learning can help improve performance, past baselines suffer from poor uniformity brought in by imbalanced data distribution. This poor uniformity manifests in samples from the minority class having poor separability in the feature space. To address this problem, we propose targeted supervised contrastive learning (TSC), which improves the uniformity of the feature distribution on the hypersphere. TSC first generates a set of targets uniformly distributed on a hypersphere. It then makes the features of different classes converge to these distinct and uniformly distributed targets during training. This forces all classes, including minority classes, to maintain a uniform distribution in the feature space, improves class boundaries, and provides better generalization even in the presence of long-tail data. 
Experiments on multiple datasets show that TSC achieves state-of-the-art performance on long-tailed recognition tasks.",True,True,"Li, Tianhong and Cao, Peng and Yuan, Yuan and Fan, Lijie and Yang, Yuzhe and Feris, Rogerio S and Indyk, Piotr and Katabi, Dina",2022.0,,,,,Targeted Supervised Contrastive Learning for Long-Tailed Recognition,[PDF] Targeted Supervised Contrastive Learning for Long-Tailed ...,https://openaccess.thecvf.com/content/CVPR2022/papers/Li_Targeted_Supervised_Contrastive_Learning_for_Long-Tailed_Recognition_CVPR_2022_paper.pdf,"TSC is especially effec- tive on long-tailed recognition tasks, since for traditional methods based on supervised contrastive loss, classes with fewer training" Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,chen2020big,\cite{chen2020big},Big Self-Supervised Models are Strong Semi-Supervised Learners,http://arxiv.org/abs/2006.10029v2,"One paradigm for learning from few labeled examples while making best use of a large amount of unlabeled data is unsupervised pretraining followed by supervised fine-tuning. Although this paradigm uses unlabeled data in a task-agnostic way, in contrast to common approaches to semi-supervised learning for computer vision, we show that it is surprisingly effective for semi-supervised learning on ImageNet. A key ingredient of our approach is the use of big (deep and wide) networks during pretraining and fine-tuning. We find that, the fewer the labels, the more this approach (task-agnostic use of unlabeled data) benefits from a bigger network. After fine-tuning, the big network can be further improved and distilled into a much smaller one with little loss in classification accuracy by using the unlabeled examples for a second time, but in a task-specific way. The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2, supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge. This procedure achieves 73.9% ImageNet top-1 accuracy with just 1% of the labels ($\le$13 labeled images per class) using ResNet-50, a $10\times$ improvement in label efficiency over the previous state-of-the-art. With 10% of labels, ResNet-50 trained with our method achieves 77.5% top-1 accuracy, outperforming standard supervised training with all of the labels.",True,True,"Chen, Ting and Kornblith, Simon and Swersky, Kevin and Norouzi, Mohammad and Hinton, Geoffrey E",2020.0,,,,Advances in neural information processing systems,Big Self-Supervised Models are Strong Semi-Supervised Learners,[2006.10029] Big Self-Supervised Models are Strong Semi ...,https://arxiv.org/abs/2006.10029,by T Chen · 2020 · Cited by 2883 — We show that it is surprisingly effective for semi-supervised learning on ImageNet. A key ingredient of our approach is the use of big (deep and wide) networks. Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,chen2021empirical,\cite{chen2021empirical},An Empirical Study of Training Self-Supervised Vision Transformers,http://arxiv.org/abs/2104.02057v4,"This paper does not describe a novel method. Instead, it studies a straightforward, incremental, yet must-know baseline given the recent progress in computer vision: self-supervised learning for Vision Transformers (ViT). 
While the training recipes for standard convolutional networks have been highly mature and robust, the recipes for ViT are yet to be built, especially in the self-supervised scenarios where training becomes more challenging. In this work, we go back to basics and investigate the effects of several fundamental components for training self-supervised ViT. We observe that instability is a major issue that degrades accuracy, and it can be hidden by apparently good results. We reveal that these results are indeed partial failure, and they can be improved when training is made more stable. We benchmark ViT results in MoCo v3 and several other self-supervised frameworks, with ablations in various aspects. We discuss the currently positive evidence as well as challenges and open questions. We hope that this work will provide useful data points and experience for future research.",True,True,"Chen, Xinlei and Xie, Saining and He, Kaiming",2021.0,,,,,An Empirical Study of Training Self-Supervised Vision Transformers,[PDF] An Empirical Study of Training Self-Supervised Vision Transformers,https://openaccess.thecvf.com/content/ICCV2021/papers/Chen_An_Empirical_Study_of_Training_Self-Supervised_Vision_Transformers_ICCV_2021_paper.pdf,"In summary, we believe that the evidence, challenges, and open questions in this study are worth knowing, if self-supervised Transformers will close the gap in" Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,caron2020unsupervised,\cite{caron2020unsupervised},"Unsupervised Learning of Visual Features by Contrasting Cluster Assignments",http://arxiv.org/abs/2006.09882v5,"Unsupervised image representations have significantly reduced the gap with supervised pretraining, notably with the recent achievements of contrastive learning methods. These contrastive methods typically work online and rely on a large number of explicit pairwise feature comparisons, which is computationally challenging. In this paper, we propose an online algorithm, SwAV, that takes advantage of contrastive methods without requiring to compute pairwise comparisons. Specifically, our method simultaneously clusters the data while enforcing consistency between cluster assignments produced for different augmentations (or views) of the same image, instead of comparing features directly as in contrastive learning. Simply put, we use a swapped prediction mechanism where we predict the cluster assignment of a view from the representation of another view. Our method can be trained with large and small batches and can scale to unlimited amounts of data. Compared to previous contrastive methods, our method is more memory efficient since it does not require a large memory bank or a special momentum network. In addition, we also propose a new data augmentation strategy, multi-crop, that uses a mix of views with different resolutions in place of two full-resolution views, without increasing the memory or compute requirements much. 
We validate our findings by achieving 75.3% top-1 accuracy on ImageNet with ResNet-50, as well as surpassing supervised pretraining on all the considered transfer tasks.",True,True,"Caron, Mathilde and Misra, Ishan and Mairal, Julien and Goyal, Priya and Bojanowski, Piotr and Joulin, Armand",2020.0,,,,Advances in neural information processing systems,"Unsupervised Learning of Visual Features by Contrasting Cluster Assignments",Unsupervised Learning of Visual Features by Contrasting ...,https://arxiv.org/abs/2006.09882,"Authors: Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, Armand Joulin. Specifically, our method simultaneously clusters the data while enforcing consistency between cluster assignments produced for different augmentations (or views) of the same image, instead of comparing features directly as in contrastive learning." Aligned Contrastive Loss for Long-Tailed Recognition,2506.01071v1,fort2021drawing,\cite{fort2021drawing},"Drawing Multiple Augmentation Samples Per Image During Training Efficiently Decreases Test Error",http://arxiv.org/abs/2105.13343v2,"In computer vision, it is standard practice to draw a single sample from the data augmentation procedure for each unique image in the mini-batch. However recent work has suggested drawing multiple samples can achieve higher test accuracies. In this work, we provide a detailed empirical evaluation of how the number of augmentation samples per unique image influences model performance on held out data when training deep ResNets. We demonstrate drawing multiple samples per image consistently enhances the test accuracy achieved for both small and large batch training. Crucially, this benefit arises even if different numbers of augmentations per image perform the same number of parameter updates and gradient evaluations (requiring the same total compute). Although prior work has found variance in the gradient estimate arising from subsampling the dataset has an implicit regularization benefit, our experiments suggest variance which arises from the data augmentation process harms generalization. 
We apply these insights to the highly performant NFNet-F5, achieving 86.8$\%$ top-1 w/o extra data on ImageNet.",True,True,"Fort, Stanislav and Brock, Andrew and Pascanu, Razvan and De, Soham and Smith, Samuel L",2021.0,,,,arXiv preprint arXiv:2105.13343,"Drawing Multiple Augmentation Samples Per Image During Training Efficiently Decreases Test Error",Drawing Multiple Augmentation Samples Per Image During Training ...,https://www.semanticscholar.org/paper/efcafa65a6e69fa52ceba41d7b6356c17c241edc,"It is demonstrated drawing multiple samples per image consistently enhances the test accuracy achieved for both small and large batch training," "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,cao2021video,\cite{cao2021video},Video Super-Resolution Transformer,http://arxiv.org/abs/2106.06847v3,"Video super-resolution (VSR), with the aim to restore a high-resolution video from its corresponding low-resolution version, is a spatial-temporal sequence prediction problem. Recently, Transformer has been gaining popularity due to its parallel computing ability for sequence-to-sequence modeling. Thus, it seems to be straightforward to apply the vision Transformer to solve VSR. However, the typical block design of Transformer with a fully connected self-attention layer and a token-wise feed-forward layer does not fit well for VSR due to the following two reasons. First, the fully connected self-attention layer neglects to exploit the data locality because this layer relies on linear layers to compute attention maps. Second, the token-wise feed-forward layer lacks the feature alignment which is important for VSR since this layer independently processes each of the input token embeddings without any interaction among them. In this paper, we make the first attempt to adapt Transformer for VSR. Specifically, to tackle the first issue, we present a spatial-temporal convolutional self-attention layer with a theoretical understanding to exploit the locality information. For the second issue, we design a bidirectional optical flow-based feed-forward layer to discover the correlations across different video frames and also align features. Extensive experiments on several benchmark datasets demonstrate the effectiveness of our proposed method. The code will be available at https://github.com/caojiezhang/VSR-Transformer.",True,True,"Cao, Jiezhang and Li, Yawei and Zhang, Kai and Van Gool, Luc",2021.0,,,,arXiv preprint arXiv:2106.06847,Video Super-Resolution Transformer,[2106.06847] Video Super-Resolution Transformer - arXiv,https://arxiv.org/abs/2106.06847,"Abstract:Video super-resolution (VSR), with the aim to restore a high-resolution video from its corresponding low-resolution version," "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,chan2021basicvsr,\cite{chan2021basicvsr},"BasicVSR: The Search for Essential Components in Video Super-Resolution and Beyond",http://arxiv.org/abs/2012.02181v2,"Video super-resolution (VSR) approaches tend to have more components than the image counterparts as they need to exploit the additional temporal dimension. Complex designs are not uncommon. In this study, we wish to untangle the knots and reconsider some most essential components for VSR guided by four basic functionalities, i.e., Propagation, Alignment, Aggregation, and Upsampling. 
By reusing some existing components added with minimal redesigns, we show a succinct pipeline, BasicVSR, that achieves appealing improvements in terms of speed and restoration quality in comparison to many state-of-the-art algorithms. We conduct systematic analysis to explain how such gain can be obtained and discuss the pitfalls. We further show the extensibility of BasicVSR by presenting an information-refill mechanism and a coupled propagation scheme to facilitate information aggregation. The BasicVSR and its extension, IconVSR, can serve as strong baselines for future VSR approaches.",True,True,"Chan, Kelvin CK and Wang, Xintao and Yu, Ke and Dong, Chao and Loy, Chen Change",2021.0,,,,,"BasicVSR: The Search for Essential Components in Video Super-Resolution and Beyond",[PDF] BasicVSR: The Search for Essential Components in Video Super ...,https://openaccess.thecvf.com/content/CVPR2021/papers/Chan_BasicVSR_The_Search_for_Essential_Components_in_Video_Super-Resolution_and_CVPR_2021_paper.pdf,"BasicVSR is a video super-resolution pipeline using Propagation, Alignment, Aggregation, and Upsampling, with bidirectional propagation and optical flow for" "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,chan2022basicvsr++,\cite{chan2022basicvsr++},Basicvsr++: Improving video super-resolution with enhanced propagation and alignment,,,True,False,"Chan, Kelvin CK and Zhou, Shangchen and Xu, Xiangyu and Loy, Chen Change",2022.0,,,,,Basicvsr++: Improving video super-resolution with enhanced propagation and alignment,"ckkelvinchan/BasicVSR_PlusPlus: Official repository of "" ...",https://github.com/ckkelvinchan/BasicVSR_PlusPlus,"GitHub - ckkelvinchan/BasicVSR_PlusPlus: Official repository of ""BasicVSR++: Improving Video Super-Resolution with Enhanced Propagation and Alignment""" "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,isobe2020video,\cite{isobe2020video},Video Super-Resolution with Recurrent Structure-Detail Network,http://arxiv.org/abs/2008.00455v1,"Most video super-resolution methods super-resolve a single reference frame with the help of neighboring frames in a temporal sliding window. They are less efficient compared to the recurrent-based methods. In this work, we propose a novel recurrent video super-resolution method which is both effective and efficient in exploiting previous frames to super-resolve the current frame. It divides the input into structure and detail components which are fed to a recurrent unit composed of several proposed two-stream structure-detail blocks. In addition, a hidden state adaptation module that allows the current frame to selectively use information from hidden state is introduced to enhance its robustness to appearance change and error accumulation. Extensive ablation study validate the effectiveness of the proposed modules. 
Experiments on several benchmark datasets demonstrate the superior performance of the proposed method compared to state-of-the-art methods on video super-resolution.",True,True,"Isobe, Takashi and Jia, Xu and Gu, Shuhang and Li, Songjiang and Wang, Shengjin and Tian, Qi",2020.0,,,,,Video Super-Resolution with Recurrent Structure-Detail Network,Video Super-Resolution with Recurrent Structure-Detail Network,https://arxiv.org/abs/2008.00455,We propose a novel recurrent video super-resolution method which is both effective and efficient in exploiting previous frames to super-resolve the current "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,isobe2020video2,\cite{isobe2020video2},Video Super-resolution with Temporal Group Attention,http://arxiv.org/abs/2007.10595v1,"Video super-resolution, which aims at producing a high-resolution video from its corresponding low-resolution version, has recently drawn increasing attention. In this work, we propose a novel method that can effectively incorporate temporal information in a hierarchical way. The input sequence is divided into several groups, with each one corresponding to a kind of frame rate. These groups provide complementary information to recover missing details in the reference frame, which is further integrated with an attention module and a deep intra-group fusion module. In addition, a fast spatial alignment is proposed to handle videos with large motion. Extensive results demonstrate the capability of the proposed model in handling videos with various motion. It achieves favorable performance against state-of-the-art methods on several benchmark datasets.",True,True,"Isobe, Takashi and Li, Songjiang and Jia, Xu and Yuan, Shanxin and Slabaugh, Gregory and Xu, Chunjing and Li, Ya-Li and Wang, Shengjin and Tian, Qi",2020.0,,,,,Video Super-resolution with Temporal Group Attention,[PDF] Video Super-Resolution With Temporal Group Attention,https://openaccess.thecvf.com/content_CVPR_2020/papers/Isobe_Video_Super-Resolution_With_Temporal_Group_Attention_CVPR_2020_paper.pdf,"For video super- resolution, both spatial information across positions and temporal information across frames can be used to enhance details for an LR frame." "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,isobe2020revisiting,\cite{isobe2020revisiting},Revisiting temporal modeling for video super-resolution,,,True,False,"Isobe, Takashi and Zhu, Fang and Jia, Xu and Wang, Shengjin",2020.0,,,,arXiv preprint arXiv:2008.05765,Revisiting temporal modeling for video super-resolution,Revisiting Temporal Modeling for Video Super-resolution,https://www.bmvc2020-conference.com/assets/papers/0033.pdf,"In this work, we carefully study and compare three temporal modeling methods (2D CNN with early fusion, 3D CNN with slow fusion and Recurrent Neural Network)" "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,jo2018deep,\cite{jo2018deep},Deep video super-resolution network using dynamic upsampling filters without explicit motion compensation,,,True,False,"Jo, Younghyun and Oh, Seoung Wug and Kang, Jaeyeon and Kim, Seon Joo",2018.0,,,,,Deep video super-resolution network using dynamic upsampling filters without explicit motion compensation,yhjo09/VSR-DUF,https://github.com/yhjo09/VSR-DUF,Deep Video Super-Resolution Network Using Dynamic Upsampling Filters Without Explicit Motion Compensation. 
This is a tensorflow implementation of the paper. "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,liang2024vrt,\cite{liang2024vrt},VRT: A Video Restoration Transformer,http://arxiv.org/abs/2201.12288v2,"Video restoration (e.g., video super-resolution) aims to restore high-quality frames from low-quality frames. Different from single image restoration, video restoration generally requires to utilize temporal information from multiple adjacent but usually misaligned video frames. Existing deep methods generally tackle with this by exploiting a sliding window strategy or a recurrent architecture, which either is restricted by frame-by-frame restoration or lacks long-range modelling ability. In this paper, we propose a Video Restoration Transformer (VRT) with parallel frame prediction and long-range temporal dependency modelling abilities. More specifically, VRT is composed of multiple scales, each of which consists of two kinds of modules: temporal mutual self attention (TMSA) and parallel warping. TMSA divides the video into small clips, on which mutual attention is applied for joint motion estimation, feature alignment and feature fusion, while self attention is used for feature extraction. To enable cross-clip interactions, the video sequence is shifted for every other layer. Besides, parallel warping is used to further fuse information from neighboring frames by parallel feature warping. Experimental results on five tasks, including video super-resolution, video deblurring, video denoising, video frame interpolation and space-time video super-resolution, demonstrate that VRT outperforms the state-of-the-art methods by large margins ($\textbf{up to 2.16dB}$) on fourteen benchmark datasets.",True,True,"Liang, Jingyun and Cao, Jiezhang and Fan, Yuchen and Zhang, Kai and Ranjan, Rakesh and Li, Yawei and Timofte, Radu and Van Gool, Luc",2024.0,,,,IEEE Transactions on Image Processing,VRT: A Video Restoration Transformer,VRT: A Video Restoration Transformer,http://arxiv.org/pdf/2201.12288v2,"Video restoration (e.g., video super-resolution) aims to restore high-quality frames from low-quality frames. Different from single image restoration, video restoration generally requires to utilize temporal information from multiple adjacent but usually misaligned video frames. Existing deep methods generally tackle with this by exploiting a sliding window strategy or a recurrent architecture, which either is restricted by frame-by-frame restoration or lacks long-range modelling ability. In this paper, we propose a Video Restoration Transformer (VRT) with parallel frame prediction and long-range temporal dependency modelling abilities. More specifically, VRT is composed of multiple scales, each of which consists of two kinds of modules: temporal mutual self attention (TMSA) and parallel warping. TMSA divides the video into small clips, on which mutual attention is applied for joint motion estimation, feature alignment and feature fusion, while self attention is used for feature extraction. To enable cross-clip interactions, the video sequence is shifted for every other layer. Besides, parallel warping is used to further fuse information from neighboring frames by parallel feature warping. 
Experimental results on five tasks, including video super-resolution, video deblurring, video denoising, video frame interpolation and space-time video super-resolution, demonstrate that VRT outperforms the state-of-the-art methods by large margins ($\textbf{up to 2.16dB}$) on fourteen benchmark datasets." "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,liang2022recurrent,\cite{liang2022recurrent},Recurrent Video Restoration Transformer with Guided Deformable Attention,http://arxiv.org/abs/2206.02146v3,"Video restoration aims at restoring multiple high-quality frames from multiple low-quality frames. Existing video restoration methods generally fall into two extreme cases, i.e., they either restore all frames in parallel or restore the video frame by frame in a recurrent way, which would result in different merits and drawbacks. Typically, the former has the advantage of temporal information fusion. However, it suffers from large model size and intensive memory consumption; the latter has a relatively small model size as it shares parameters across frames; however, it lacks long-range dependency modeling ability and parallelizability. In this paper, we attempt to integrate the advantages of the two cases by proposing a recurrent video restoration transformer, namely RVRT. RVRT processes local neighboring frames in parallel within a globally recurrent framework which can achieve a good trade-off between model size, effectiveness, and efficiency. Specifically, RVRT divides the video into multiple clips and uses the previously inferred clip feature to estimate the subsequent clip feature. Within each clip, different frame features are jointly updated with implicit feature aggregation. Across different clips, the guided deformable attention is designed for clip-to-clip alignment, which predicts multiple relevant locations from the whole inferred clip and aggregates their features by the attention mechanism. Extensive experiments on video super-resolution, deblurring, and denoising show that the proposed RVRT achieves state-of-the-art performance on benchmark datasets with balanced model size, testing memory and runtime.",True,True,"Liang, Jingyun and Fan, Yuchen and Xiang, Xiaoyu and Ranjan, Rakesh and Ilg, Eddy and Green, Simon and Cao, Jiezhang and Zhang, Kai and Timofte, Radu and Gool, Luc V",2022.0,,,,Advances in Neural Information Processing Systems,Recurrent Video Restoration Transformer with Guided Deformable Attention,Recurrent Video Restoration Transformer with Guided Deformable Attention,http://arxiv.org/pdf/2206.02146v3,"Video restoration aims at restoring multiple high-quality frames from multiple low-quality frames. Existing video restoration methods generally fall into two extreme cases, i.e., they either restore all frames in parallel or restore the video frame by frame in a recurrent way, which would result in different merits and drawbacks. Typically, the former has the advantage of temporal information fusion. However, it suffers from large model size and intensive memory consumption; the latter has a relatively small model size as it shares parameters across frames; however, it lacks long-range dependency modeling ability and parallelizability. In this paper, we attempt to integrate the advantages of the two cases by proposing a recurrent video restoration transformer, namely RVRT. 
RVRT processes local neighboring frames in parallel within a globally recurrent framework which can achieve a good trade-off between model size, effectiveness, and efficiency. Specifically, RVRT divides the video into multiple clips and uses the previously inferred clip feature to estimate the subsequent clip feature. Within each clip, different frame features are jointly updated with implicit feature aggregation. Across different clips, the guided deformable attention is designed for clip-to-clip alignment, which predicts multiple relevant locations from the whole inferred clip and aggregates their features by the attention mechanism. Extensive experiments on video super-resolution, deblurring, and denoising show that the proposed RVRT achieves state-of-the-art performance on benchmark datasets with balanced model size, testing memory and runtime." "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,wang2019edvr,\cite{wang2019edvr},EDVR: Video Restoration with Enhanced Deformable Convolutional Networks,http://arxiv.org/abs/1905.02716v1,"Video restoration tasks, including super-resolution, deblurring, etc, are drawing increasing attention in the computer vision community. A challenging benchmark named REDS is released in the NTIRE19 Challenge. This new benchmark challenges existing methods from two aspects: (1) how to align multiple frames given large motions, and (2) how to effectively fuse different frames with diverse motion and blur. In this work, we propose a novel Video Restoration framework with Enhanced Deformable networks, termed EDVR, to address these challenges. First, to handle large motions, we devise a Pyramid, Cascading and Deformable (PCD) alignment module, in which frame alignment is done at the feature level using deformable convolutions in a coarse-to-fine manner. Second, we propose a Temporal and Spatial Attention (TSA) fusion module, in which attention is applied both temporally and spatially, so as to emphasize important features for subsequent restoration. Thanks to these modules, our EDVR wins the champions and outperforms the second place by a large margin in all four tracks in the NTIRE19 video restoration and enhancement challenges. EDVR also demonstrates superior performance to state-of-the-art published methods on video super-resolution and deblurring. The code is available at https://github.com/xinntao/EDVR.",True,True,"Wang, Xintao and Chan, Kelvin CK and Yu, Ke and Dong, Chao and Change Loy, Chen",2019.0,,,,,EDVR: Video Restoration with Enhanced Deformable Convolutional Networks,EDVR: Video Restoration with Enhanced Deformable Convolutional Networks,http://arxiv.org/pdf/1905.02716v1,"Video restoration tasks, including super-resolution, deblurring, etc, are drawing increasing attention in the computer vision community. A challenging benchmark named REDS is released in the NTIRE19 Challenge. This new benchmark challenges existing methods from two aspects: (1) how to align multiple frames given large motions, and (2) how to effectively fuse different frames with diverse motion and blur. In this work, we propose a novel Video Restoration framework with Enhanced Deformable networks, termed EDVR, to address these challenges. First, to handle large motions, we devise a Pyramid, Cascading and Deformable (PCD) alignment module, in which frame alignment is done at the feature level using deformable convolutions in a coarse-to-fine manner. 
Second, we propose a Temporal and Spatial Attention (TSA) fusion module, in which attention is applied both temporally and spatially, so as to emphasize important features for subsequent restoration. Thanks to these modules, our EDVR wins the champions and outperforms the second place by a large margin in all four tracks in the NTIRE19 video restoration and enhancement challenges. EDVR also demonstrates superior performance to state-of-the-art published methods on video super-resolution and deblurring. The code is available at https://github.com/xinntao/EDVR." "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,xue2019video,\cite{xue2019video},Video Enhancement with Task-Oriented Flow,http://arxiv.org/abs/1711.09078v3,"Many video enhancement algorithms rely on optical flow to register frames in a video sequence. Precise flow estimation is however intractable; and optical flow itself is often a sub-optimal representation for particular video processing tasks. In this paper, we propose task-oriented flow (TOFlow), a motion representation learned in a self-supervised, task-specific manner. We design a neural network with a trainable motion estimation component and a video processing component, and train them jointly to learn the task-oriented flow. For evaluation, we build Vimeo-90K, a large-scale, high-quality video dataset for low-level video processing. TOFlow outperforms traditional optical flow on standard benchmarks as well as our Vimeo-90K dataset in three video processing tasks: frame interpolation, video denoising/deblocking, and video super-resolution.",True,True,"Xue, Tianfan and Chen, Baian and Wu, Jiajun and Wei, Donglai and Freeman, William T",2019.0,,,,International Journal of Computer Vision,Video Enhancement with Task-Oriented Flow,Video Enhancement with Task-Oriented Flow,http://toflow.csail.mit.edu/,"In this paper, we propose task-oriented flow (TOFlow), a flow representation tailored for specific video processing tasks." 
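The TOFlow, EDVR, and BasicVSR entries above all hinge on aligning neighboring frames to a reference frame before fusion. As a point of reference, here is a minimal sketch, assuming PyTorch, of the flow-based backward warping such pipelines build on; `flow_warp` and its (x, y) flow-channel convention are illustrative assumptions, not code from any of the cited papers.

```python
# Minimal sketch: backward-warp a neighboring frame toward the reference
# frame using a dense optical flow field (flow given in pixels).
import torch
import torch.nn.functional as F

def flow_warp(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp `frame` (N, C, H, W) by `flow` (N, 2, H, W); channel 0 is x-displacement."""
    n, _, h, w = frame.shape
    # Base sampling grid of pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=frame.dtype),
        torch.arange(w, dtype=frame.dtype),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0).expand(n, -1, -1, -1)
    coords = grid + flow  # displaced sampling coordinates
    # Normalize to [-1, 1], as grid_sample expects.
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(frame, sample_grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

# Usage: align a neighboring frame to the reference before fusion.
neighbor = torch.rand(1, 3, 64, 64)
flow = torch.zeros(1, 2, 64, 64)  # zero flow -> identity warp
aligned = flow_warp(neighbor, flow)
assert torch.allclose(aligned, neighbor, atol=1e-5)
```

In practice the flow would come from a flow estimator (task-oriented in TOFlow's case), and EDVR replaces explicit warping with deformable convolutions; the warp above is only the common baseline operation.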
"Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,liu2013bayesian,\cite{liu2013bayesian},On Bayesian adaptive video super resolution,,,True,False,"Liu, Ce and Sun, Deqing",2013.0,,,,IEEE transactions on pattern analysis and machine intelligence,On Bayesian adaptive video super resolution,[PDF] On Bayesian Adaptive Video Super Resolution - People,https://people.csail.mit.edu/celiu/pdfs/TPAMI13-VSR.pdf,"In this paper, we propose a Bayesian approach to adaptive video super resolution via simultaneously estimating underlying motion, blur kernel and noise level" "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,nah2019ntire,\cite{nah2019ntire},Ntire 2019 challenge on video deblurring and super-resolution: Dataset and study,,,True,False,"Nah, Seungjun and Baik, Sungyong and Hong, Seokil and Moon, Gyeongsik and Son, Sanghyun and Timofte, Radu and Mu Lee, Kyoung",2019.0,,,,,Ntire 2019 challenge on video deblurring and super-resolution: Dataset and study,[PDF] NTIRE 2019 Challenge on Video Deblurring and Super-Resolution,https://openaccess.thecvf.com/content_CVPRW_2019/papers/NTIRE/Nah_NTIRE_2019_Challenge_on_Video_Deblurring_and_Super-Resolution_Dataset_and_CVPRW_2019_paper.pdf,"This paper introduces a novel large dataset for video de- blurring, video super-resolution and studies the state-of- the-art as emerged from the NTIRE 2019" "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,yi2019progressive,\cite{yi2019progressive},Progressive fusion video super-resolution network via exploiting non-local spatio-temporal correlations,,,True,False,"Yi, Peng and Wang, Zhongyuan and Jiang, Kui and Jiang, Junjun and Ma, Jiayi",2019.0,,,,,Progressive fusion video super-resolution network via exploiting non-local spatio-temporal correlations,Progressive Fusion Video Super-Resolution Network via Exploiting ...,https://github.com/psychopa4/PFNL,"GitHub - psychopa4/PFNL: Progressive Fusion Video Super-Resolution Network via Exploiting Non-Local Spatio-Temporal Correlations * GitHub Copilot Write better code with AI * Why GitHub * The ReadME Project GitHub community articles * GitHub Advanced Security Enterprise-grade security features Search code, repositories, users, issues, pull requests... Reload to refresh your session.You signed out in another tab or window. Repository files navigation The datasets and checkpoint file are re-uploaded to TeraBox, eval,test,checkpoint. Note that the training dataset provides Ground Truth images and Bicubic downsampling LR images, while the evaluation dataset provides Gaussian blur and downsampling images. We provide Vid4 and UDM10 as testing datasets. This frame is from auditorium in UDM10 testing dataset. This frame is from photography in UDM10 testing dataset. PSNR/SSIM on UDM10 test dataset (4xSR)" "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,li2020mucan,\cite{li2020mucan},"MuCAN: Multi-Correspondence Aggregation Network for Video Super-Resolution",http://arxiv.org/abs/2007.11803v1,"Video super-resolution (VSR) aims to utilize multiple low-resolution frames to generate a high-resolution prediction for each frame. In this process, inter- and intra-frames are the key sources for exploiting temporal and spatial information. However, there are a couple of limitations for existing VSR methods. 
First, optical flow is often used to establish temporal correspondence. But flow estimation itself is error-prone and affects recovery results. Second, similar patterns existing in natural images are rarely exploited for the VSR task. Motivated by these findings, we propose a temporal multi-correspondence aggregation strategy to leverage similar patches across frames, and a cross-scale nonlocal-correspondence aggregation scheme to explore self-similarity of images across scales. Based on these two new modules, we build an effective multi-correspondence aggregation network (MuCAN) for VSR. Our method achieves state-of-the-art results on multiple benchmark datasets. Extensive experiments justify the effectiveness of our method.",True,True,"Li, Wenbo and Tao, Xin and Guo, Taian and Qi, Lu and Lu, Jiangbo and Jia, Jiaya",2020.0,,,,,"MuCAN: Multi-Correspondence Aggregation Network for Video Super-Resolution",Multi-Correspondence Aggregation Network for Video Super ... - arXiv,https://arxiv.org/abs/2007.11803,We build an effective multi-correspondence aggregation network (MuCAN) for VSR. Our method achieves state-of-the-art results on multiple benchmark datasets. "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,realvsr,\cite{realvsr},Real-world video super-resolution: A benchmark dataset and a decomposition based learning scheme,,,True,False,"Yang, Xi and Xiang, Wangmeng and Zeng, Hui and Zhang, Lei",2021.0,,,,,Real-world video super-resolution: A benchmark dataset and a decomposition based learning scheme,"IanYeung/RealVSR: Dataset and Code for ICCV 2021 paper ""Real ...",https://github.com/IanYeung/RealVSR,"Dataset and Code for ICCV 2021 paper ""Real-world Video Super-resolution: A Benchmark Dataset and A Decomposition based Learning Scheme""" "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,realbasicvsr,\cite{realbasicvsr},Investigating Tradeoffs in Real-World Video Super-Resolution,http://arxiv.org/abs/2111.12704v1,"The diversity and complexity of degradations in real-world video super-resolution (VSR) pose non-trivial challenges in inference and training. First, while long-term propagation leads to improved performance in cases of mild degradations, severe in-the-wild degradations could be exaggerated through propagation, impairing output quality. To balance the tradeoff between detail synthesis and artifact suppression, we found an image pre-cleaning stage indispensable to reduce noises and artifacts prior to propagation. Equipped with a carefully designed cleaning module, our RealBasicVSR outperforms existing methods in both quality and efficiency. Second, real-world VSR models are often trained with diverse degradations to improve generalizability, requiring increased batch size to produce a stable gradient. Inevitably, the increased computational burden results in various problems, including 1) speed-performance tradeoff and 2) batch-length tradeoff. To alleviate the first tradeoff, we propose a stochastic degradation scheme that reduces up to 40\% of training time without sacrificing performance. We then analyze different training settings and suggest that employing longer sequences rather than larger batches during training allows more effective uses of temporal information, leading to more stable performance during inference. 
To facilitate fair comparisons, we propose the new VideoLQ dataset, which contains a large variety of real-world low-quality video sequences containing rich textures and patterns. Our dataset can serve as a common ground for benchmarking. Code, models, and the dataset will be made publicly available.",True,True,"Chan, Kelvin CK and Zhou, Shangchen and Xu, Xiangyu and Loy, Chen Change",2022.0,,,,,Investigating Tradeoffs in Real-World Video Super-Resolution,[PDF] Investigating Tradeoffs in Real-World Video Super-Resolution,https://openaccess.thecvf.com/content/CVPR2022/papers/Chan_Investigating_Tradeoffs_in_Real-World_Video_Super-Resolution_CVPR_2022_paper.pdf,"Figure 1. Results on a Real-World Video. In this work, we investigate various tradeoffs caused by the complex and diverse degradations in real-world VSR." "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,xie2023mitigating,\cite{xie2023mitigating},Mitigating Artifacts in Real-World Video Super-Resolution Models,http://arxiv.org/abs/2212.07339v1,"The recurrent structure is a prevalent framework for the task of video super-resolution, which models the temporal dependency between frames via hidden states. When applied to real-world scenarios with unknown and complex degradations, hidden states tend to contain unpleasant artifacts and propagate them to restored frames. In this circumstance, our analyses show that such artifacts can be largely alleviated when the hidden state is replaced with a cleaner counterpart. Based on the observations, we propose a Hidden State Attention (HSA) module to mitigate artifacts in real-world video super-resolution. Specifically, we first adopt various cheap filters to produce a hidden state pool. For example, Gaussian blur filters are for smoothing artifacts while sharpening filters are for enhancing details. To aggregate a new hidden state that contains fewer artifacts from the hidden state pool, we devise a Selective Cross Attention (SCA) module, in which the attention between input features and each hidden state is calculated. Equipped with HSA, our proposed method, namely FastRealVSR, is able to achieve 2x speedup while obtaining better performance than Real-BasicVSR. Codes will be available at https://github.com/TencentARC/FastRealVSR",True,True,"Xie, Liangbin and Wang, Xintao and Shi, Shuwei and Gu, Jinjin and Dong, Chao and Shan, Ying",2023.0,,,,,Mitigating Artifacts in Real-World Video Super-Resolution Models,[PDF] Mitigating Artifacts in Real-World Video Super-resolution Models,https://ojs.aaai.org/index.php/AAAI/article/view/25398/25170,"Artifacts in video super-resolution are mitigated by replacing hidden states with a cleaner one using a Hidden State Attention (HSA) module, which uses cheap" "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,S4,\cite{S4},Efficiently modeling long sequences with structured state spaces,,,True,False,"Gu, Albert and Goel, Karan and R{\'e}, Christopher",2021.0,,,,arXiv preprint arXiv:2111.00396,Efficiently modeling long sequences with structured state spaces,Efficiently Modeling Long Sequences with Structured State Spaces,http://arxiv.org/pdf/2111.00396v3,"A central goal of sequence modeling is designing a single principled model that can address sequence data across a range of modalities and tasks, particularly on long-range dependencies. 
Although conventional models including RNNs, CNNs, and Transformers have specialized variants for capturing long dependencies, they still struggle to scale to very long sequences of $10000$ or more steps. A promising recent approach proposed modeling sequences by simulating the fundamental state space model (SSM) \( x'(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t) \), and showed that for appropriate choices of the state matrix \( A \), this system could handle long-range dependencies mathematically and empirically. However, this method has prohibitive computation and memory requirements, rendering it infeasible as a general sequence modeling solution. We propose the Structured State Space sequence model (S4) based on a new parameterization for the SSM, and show that it can be computed much more efficiently than prior approaches while preserving their theoretical strengths. Our technique involves conditioning \( A \) with a low-rank correction, allowing it to be diagonalized stably and reducing the SSM to the well-studied computation of a Cauchy kernel. S4 achieves strong empirical results across a diverse range of established benchmarks, including (i) 91\% accuracy on sequential CIFAR-10 with no data augmentation or auxiliary losses, on par with a larger 2-D ResNet, (ii) substantially closing the gap to Transformers on image and language modeling tasks, while performing generation $60\times$ faster (iii) SoTA on every task from the Long Range Arena benchmark, including solving the challenging Path-X task of length 16k that all prior work fails on, while being as efficient as all competitors." "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,variant1,\cite{variant1},Long Movie Clip Classification with State-Space Video Models,http://arxiv.org/abs/2204.01692v3,"Most modern video recognition models are designed to operate on short video clips (e.g., 5-10s in length). Thus, it is challenging to apply such models to long movie understanding tasks, which typically require sophisticated long-range temporal reasoning. The recently introduced video transformers partially address this issue by using long-range temporal self-attention. However, due to the quadratic cost of self-attention, such models are often costly and impractical to use. Instead, we propose ViS4mer, an efficient long-range video model that combines the strengths of self-attention and the recently introduced structured state-space sequence (S4) layer. Our model uses a standard Transformer encoder for short-range spatiotemporal feature extraction, and a multi-scale temporal S4 decoder for subsequent long-range temporal reasoning. By progressively reducing the spatiotemporal feature resolution and channel dimension at each decoder layer, ViS4mer learns complex long-range spatiotemporal dependencies in a video. Furthermore, ViS4mer is $2.63\times$ faster and requires $8\times$ less GPU memory than the corresponding pure self-attention-based model. Additionally, ViS4mer achieves state-of-the-art results in $6$ out of $9$ long-form movie video classification tasks on the Long Video Understanding (LVU) benchmark. Furthermore, we show that our approach successfully generalizes to other domains, achieving competitive results on the Breakfast and the COIN procedural activity datasets. 
The code is publicly available at: https://github.com/md-mohaiminul/ViS4mer.",True,True,"Islam, Md Mohaiminul and Bertasius, Gedas",2022.0,,,,,Long Movie Clip Classification with State-Space Video Models,Long Movie Clip Classification with State-Space Video ...,https://arxiv.org/abs/2204.01692,"by MM Islam · 2022 · Cited by 137 — We propose ViS4mer, an efficient long-range video model that combines the strengths of self-attention and the recently introduced structured state-space" "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,variant2,\cite{variant2},"S4ND: Modeling Images and Videos as Multidimensional Signals Using State Spaces",http://arxiv.org/abs/2210.06583v2,"Visual data such as images and videos are typically modeled as discretizations of inherently continuous, multidimensional signals. Existing continuous-signal models attempt to exploit this fact by modeling the underlying signals of visual (e.g., image) data directly. However, these models have not yet been able to achieve competitive performance on practical vision tasks such as large-scale image and video classification. Building on a recent line of work on deep state space models (SSMs), we propose S4ND, a new multidimensional SSM layer that extends the continuous-signal modeling ability of SSMs to multidimensional data including images and videos. We show that S4ND can model large-scale visual data in $1$D, $2$D, and $3$D as continuous multidimensional signals and demonstrates strong performance by simply swapping Conv2D and self-attention layers with S4ND layers in existing state-of-the-art models. On ImageNet-1k, S4ND exceeds the performance of a Vision Transformer baseline by $1.5\%$ when training with a $1$D sequence of patches, and matches ConvNeXt when modeling images in $2$D. For videos, S4ND improves on an inflated $3$D ConvNeXt in activity classification on HMDB-51 by $4\%$. S4ND implicitly learns global, continuous convolutional kernels that are resolution invariant by construction, providing an inductive bias that enables generalization across multiple resolutions. By developing a simple bandlimiting modification to S4 to overcome aliasing, S4ND achieves strong zero-shot (unseen at training time) resolution performance, outperforming a baseline Conv2D by $40\%$ on CIFAR-10 when trained on $8 \times 8$ and tested on $32 \times 32$ images. When trained with progressive resizing, S4ND comes within $\sim 1\%$ of a high-resolution model while training $22\%$ faster.",True,True,"Nguyen, Eric and Goel, Karan and Gu, Albert and Downs, Gordon and Shah, Preey and Dao, Tri and Baccus, Stephen and R{\'e}, Christopher",2022.0,,,,Advances in neural information processing systems,"S4ND: Modeling Images and Videos as Multidimensional Signals Using State Spaces",[PDF] S4ND: Modeling Images and Videos as Multidimensional Signals ...,https://proceedings.neurips.cc/paper_files/paper/2022/file/13388efc819c09564c66ab2dc8463809-Paper-Conference.pdf,"S4 investigated state space models, which are linear time-invariant systems that map signals u(t) ↦ 
y(t) and can be represented either as a linear ODE (" "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,variant3,\cite{variant3},Selective structured state-spaces for long-form video understanding,,,True,False,"Wang, Jue and Zhu, Wentao and Wang, Pichao and Yu, Xiang and Liu, Linda and Omar, Mohamed and Hamid, Raffay",2023.0,,,,,Selective structured state-spaces for long-form video understanding,[PDF] Selective Structured State-Spaces for Long-Form Video ...,https://openaccess.thecvf.com/content/CVPR2023/papers/Wang_Selective_Structured_State-Spaces_for_Long-Form_Video_Understanding_CVPR_2023_paper.pdf,"We present extensive comparative results using three challenging long-form video understanding datasets. (LVU, COIN and Breakfast), demonstrating that our ap-." "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,variant4,\cite{variant4},Diagonal State Spaces are as Effective as Structured State Spaces,http://arxiv.org/abs/2203.14343v3,"Modeling long range dependencies in sequential data is a fundamental step towards attaining human-level performance in many modalities such as text, vision, audio and video. While attention-based models are a popular and effective choice in modeling short-range interactions, their performance on tasks requiring long range reasoning has been largely inadequate. In an exciting result, Gu et al. (ICLR 2022) proposed the $\textit{Structured State Space}$ (S4) architecture delivering large gains over state-of-the-art models on several long-range tasks across various modalities. The core proposition of S4 is the parameterization of state matrices via a diagonal plus low rank structure, allowing efficient computation. In this work, we show that one can match the performance of S4 even without the low rank correction and thus assuming the state matrices to be diagonal. Our $\textit{Diagonal State Space}$ (DSS) model matches the performance of S4 on Long Range Arena tasks, speech classification on Speech Commands dataset, while being conceptually simpler and straightforward to implement.",True,True,"Gupta, Ankit and Gu, Albert and Berant, Jonathan",2022.0,,,,Advances in Neural Information Processing Systems,Diagonal State Spaces are as Effective as Structured State Spaces,Diagonal State Spaces are as Effective as Structured State Spaces,http://arxiv.org/pdf/2203.14343v3,"Modeling long range dependencies in sequential data is a fundamental step towards attaining human-level performance in many modalities such as text, vision, audio and video. While attention-based models are a popular and effective choice in modeling short-range interactions, their performance on tasks requiring long range reasoning has been largely inadequate. In an exciting result, Gu et al. (ICLR 2022) proposed the $\textit{Structured State Space}$ (S4) architecture delivering large gains over state-of-the-art models on several long-range tasks across various modalities. The core proposition of S4 is the parameterization of state matrices via a diagonal plus low rank structure, allowing efficient computation. In this work, we show that one can match the performance of S4 even without the low rank correction and thus assuming the state matrices to be diagonal. Our $\textit{Diagonal State Space}$ (DSS) model matches the performance of S4 on Long Range Arena tasks, speech classification on Speech Commands dataset, while being conceptually simpler and straightforward to implement." 
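The S4 and DSS entries above center on the state space model \( x'(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t) \), with DSS showing that a diagonal state matrix suffices. The following is a minimal NumPy sketch, under simplified assumptions (single-input single-output, zero-order-hold discretization, naive sequential scan), of how such a diagonal SSM unrolls into a linear recurrence; it is illustrative only, not the S4/DSS implementation.

```python
# Diagonal SSM  x'(t) = A x(t) + B u(t),  y(t) = C x(t) + D u(t),
# discretized with zero-order hold and unrolled as x_k = Abar x_{k-1} + Bbar u_k.
import numpy as np

def run_diagonal_ssm(u, A_diag, B, C, D, dt=1.0):
    """u: (L,) input sequence; A_diag: (N,) complex diagonal of A."""
    Abar = np.exp(dt * A_diag)           # ZOH transition: exp(dt*A) for diagonal A
    Bbar = (Abar - 1.0) / A_diag * B     # ZOH input map, elementwise since A is diagonal
    x = np.zeros_like(A_diag)            # complex state
    ys = []
    for u_k in u:                        # O(L*N) sequential scan (S4 computes this
        x = Abar * x + Bbar * u_k        # as a convolution instead)
        ys.append((C * x).sum().real + D * u_k)
    return np.array(ys)

# Stable diagonal state matrix (negative real parts), random input.
rng = np.random.default_rng(0)
N = 16
A_diag = -0.5 + 1j * np.arange(N)        # imaginary spread, loosely HiPPO-flavored
B = np.ones(N); C = rng.standard_normal(N); D = 0.0
y = run_diagonal_ssm(rng.standard_normal(64), A_diag, B, C, D, dt=0.1)
print(y.shape)  # (64,)
```

Because \( A \) is diagonal, the transition and input maps reduce to elementwise operations, which is exactly the simplification DSS exploits relative to S4's diagonal-plus-low-rank parameterization.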
"Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,variant5,\cite{variant5},Simplified State Space Layers for Sequence Modeling,http://arxiv.org/abs/2208.04933v3,"Models using structured state space sequence (S4) layers have achieved state-of-the-art performance on long-range sequence modeling tasks. An S4 layer combines linear state space models (SSMs), the HiPPO framework, and deep learning to achieve high performance. We build on the design of the S4 layer and introduce a new state space layer, the S5 layer. Whereas an S4 layer uses many independent single-input, single-output SSMs, the S5 layer uses one multi-input, multi-output SSM. We establish a connection between S5 and S4, and use this to develop the initialization and parameterization used by the S5 model. The result is a state space layer that can leverage efficient and widely implemented parallel scans, allowing S5 to match the computational efficiency of S4, while also achieving state-of-the-art performance on several long-range sequence modeling tasks. S5 averages 87.4% on the long range arena benchmark, and 98.5% on the most difficult Path-X task.",True,True,"Smith, Jimmy TH and Warrington, Andrew and Linderman, Scott W",2022.0,,,,arXiv preprint arXiv:2208.04933,Simplified State Space Layers for Sequence Modeling,Simplified State Space Layers for Sequence Modeling,http://arxiv.org/pdf/2208.04933v3,"Models using structured state space sequence (S4) layers have achieved state-of-the-art performance on long-range sequence modeling tasks. An S4 layer combines linear state space models (SSMs), the HiPPO framework, and deep learning to achieve high performance. We build on the design of the S4 layer and introduce a new state space layer, the S5 layer. Whereas an S4 layer uses many independent single-input, single-output SSMs, the S5 layer uses one multi-input, multi-output SSM. We establish a connection between S5 and S4, and use this to develop the initialization and parameterization used by the S5 model. The result is a state space layer that can leverage efficient and widely implemented parallel scans, allowing S5 to match the computational efficiency of S4, while also achieving state-of-the-art performance on several long-range sequence modeling tasks. S5 averages 87.4% on the long range arena benchmark, and 98.5% on the most difficult Path-X task." "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,mamba,\cite{mamba},Mamba: Linear-Time Sequence Modeling with Selective State Spaces,http://arxiv.org/abs/2312.00752v2,"Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module. Many subquadratic-time architectures such as linear attention, gated convolution and recurrent models, and structured state space models (SSMs) have been developed to address Transformers' computational inefficiency on long sequences, but they have not performed as well as attention on important modalities such as language. We identify that a key weakness of such models is their inability to perform content-based reasoning, and make several improvements. First, simply letting the SSM parameters be functions of the input addresses their weakness with discrete modalities, allowing the model to selectively propagate or forget information along the sequence length dimension depending on the current token. 
Second, even though this change prevents the use of efficient convolutions, we design a hardware-aware parallel algorithm in recurrent mode. We integrate these selective SSMs into a simplified end-to-end neural network architecture without attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$ higher throughput than Transformers) and linear scaling in sequence length, and its performance improves on real data up to million-length sequences. As a general sequence model backbone, Mamba achieves state-of-the-art performance across several modalities such as language, audio, and genomics. On language modeling, our Mamba-3B model outperforms Transformers of the same size and matches Transformers twice its size, both in pretraining and downstream evaluation.",True,True,"Gu, Albert and Dao, Tri",2023.0,,,,arXiv preprint arXiv:2312.00752,Mamba: Linear-Time Sequence Modeling with Selective State Spaces,Mamba: Linear-Time Sequence Modeling with Selective State Spaces,https://openreview.net/forum?id=tEYskw1VY2,"This paper proposes Mamba, a linear-time sequence model with an intra-layer combination of Selective S4D, Short Convolution and Gated Linear Unit. The paper" "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,domain1,\cite{domain1},Mamba-nd: Selective state space modeling for multi-dimensional data,,,True,False,"Li, Shufan and Singh, Harkanwar and Grover, Aditya",2024.0,,,,arXiv preprint arXiv:2402.05892,Mamba-nd: Selective state space modeling for multi-dimensional data,Mamba-ND: Selective State Space Modeling for Multi-Dimensional ...,https://arxiv.org/abs/2402.05892,"In this work, we present Mamba-ND, a generalized design extending the Mamba architecture to arbitrary multi-dimensional data." "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,domain2,\cite{domain2},PointMamba: A Simple State Space Model for Point Cloud Analysis,http://arxiv.org/abs/2402.10739v5,"Transformers have become one of the foundational architectures in point cloud analysis tasks due to their excellent global modeling ability. However, the attention mechanism has quadratic complexity, making the design of a linear complexity method with global modeling appealing. In this paper, we propose PointMamba, transferring the success of Mamba, a recent representative state space model (SSM), from NLP to point cloud analysis tasks. Unlike traditional Transformers, PointMamba employs a linear complexity algorithm, presenting global modeling capacity while significantly reducing computational costs. Specifically, our method leverages space-filling curves for effective point tokenization and adopts an extremely simple, non-hierarchical Mamba encoder as the backbone. Comprehensive evaluations demonstrate that PointMamba achieves superior performance across multiple datasets while significantly reducing GPU memory usage and FLOPs. This work underscores the potential of SSMs in 3D vision-related tasks and presents a simple yet effective Mamba-based baseline for future research. 
The code will be made available at \url{https://github.com/LMD0311/PointMamba}.",True,True,"Liang, Dingkang and Zhou, Xin and Wang, Xinyu and Zhu, Xingkui and Xu, Wei and Zou, Zhikang and Ye, Xiaoqing and Bai, Xiang",2024.0,,,,arXiv preprint arXiv:2402.10739,PointMamba: A Simple State Space Model for Point Cloud Analysis,PointMamba: A Simple State Space Model for Point Cloud Analysis,http://arxiv.org/pdf/2402.10739v5,"Transformers have become one of the foundational architectures in point cloud analysis tasks due to their excellent global modeling ability. However, the attention mechanism has quadratic complexity, making the design of a linear complexity method with global modeling appealing. In this paper, we propose PointMamba, transferring the success of Mamba, a recent representative state space model (SSM), from NLP to point cloud analysis tasks. Unlike traditional Transformers, PointMamba employs a linear complexity algorithm, presenting global modeling capacity while significantly reducing computational costs. Specifically, our method leverages space-filling curves for effective point tokenization and adopts an extremely simple, non-hierarchical Mamba encoder as the backbone. Comprehensive evaluations demonstrate that PointMamba achieves superior performance across multiple datasets while significantly reducing GPU memory usage and FLOPs. This work underscores the potential of SSMs in 3D vision-related tasks and presents a simple yet effective Mamba-based baseline for future research. The code will be made available at \url{https://github.com/LMD0311/PointMamba}." "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,domain3,\cite{domain3},"Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model",http://arxiv.org/abs/2401.09417v3,"Recently the state space models (SSMs) with efficient hardware-aware designs, i.e., the Mamba deep learning model, have shown great potential for long sequence modeling. Meanwhile building efficient and generic vision backbones purely upon SSMs is an appealing direction. However, representing visual data is challenging for SSMs due to the position-sensitivity of visual data and the requirement of global context for visual understanding. In this paper, we show that the reliance on self-attention for visual representation learning is not necessary and propose a new generic vision backbone with bidirectional Mamba blocks (Vim), which marks the image sequences with position embeddings and compresses the visual representation with bidirectional state space models. On ImageNet classification, COCO object detection, and ADE20k semantic segmentation tasks, Vim achieves higher performance compared to well-established vision transformers like DeiT, while also demonstrating significantly improved computation & memory efficiency. For example, Vim is 2.8$\times$ faster than DeiT and saves 86.8% GPU memory when performing batch inference to extract features on images with a resolution of 1248$\times$1248. The results demonstrate that Vim is capable of overcoming the computation & memory constraints on performing Transformer-style understanding for high-resolution images and it has great potential to be the next-generation backbone for vision foundation models. 
Code is available at https://github.com/hustvl/Vim.",True,True,"Zhu, Lianghui and Liao, Bencheng and Zhang, Qian and Wang, Xinlong and Liu, Wenyu and Wang, Xinggang",2024.0,,,,arXiv preprint arXiv:2401.09417,"Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model",Vision Mamba: Efficient Visual Representation Learning with ... - arXiv,https://arxiv.org/abs/2401.09417,"In this paper, we show that the reliance on self-attention for visual representation learning is not necessary and propose a new generic vision backbone with bidirectional Mamba blocks (Vim), which marks the image sequences with position embeddings and compresses the visual representation with bidirectional state space models." "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,self-supervised_task1,\cite{self-supervised_task1},"Unsupervised Learning of Visual Features by Contrasting Cluster Assignments",http://arxiv.org/abs/2006.09882v5,"Unsupervised image representations have significantly reduced the gap with supervised pretraining, notably with the recent achievements of contrastive learning methods. These contrastive methods typically work online and rely on a large number of explicit pairwise feature comparisons, which is computationally challenging. In this paper, we propose an online algorithm, SwAV, that takes advantage of contrastive methods without requiring to compute pairwise comparisons. Specifically, our method simultaneously clusters the data while enforcing consistency between cluster assignments produced for different augmentations (or views) of the same image, instead of comparing features directly as in contrastive learning. Simply put, we use a swapped prediction mechanism where we predict the cluster assignment of a view from the representation of another view. Our method can be trained with large and small batches and can scale to unlimited amounts of data. Compared to previous contrastive methods, our method is more memory efficient since it does not require a large memory bank or a special momentum network. In addition, we also propose a new data augmentation strategy, multi-crop, that uses a mix of views with different resolutions in place of two full-resolution views, without increasing the memory or compute requirements much.
We validate our findings by achieving 75.3% top-1 accuracy on ImageNet with ResNet-50, as well as surpassing supervised pretraining on all the considered transfer tasks.",True,True,"Caron, Mathilde and Misra, Ishan and Mairal, Julien and Goyal, Priya and Bojanowski, Piotr and Joulin, Armand",2020.0,,,,Advances in neural information processing systems,"Unsupervised Learning of Visual Features by Contrasting Cluster Assignments",Unsupervised Learning of Visual Features by Contrasting ...,https://arxiv.org/abs/2006.09882,"Specifically, our method simultaneously clusters the data while enforcing consistency between cluster assignments produced for different augmentations (or views) of the same image, instead of comparing features directly as in contrastive learning." "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,self-supervised_task2,\cite{self-supervised_task2},Emerging Properties in Self-Supervised Vision Transformers,http://arxiv.org/abs/2104.14294v2,"In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the fact that adapting self-supervised methods to this architecture works particularly well, we make the following observations: first, self-supervised ViT features contain explicit information about the semantic segmentation of an image, which does not emerge as clearly with supervised ViTs, nor with convnets. Second, these features are also excellent k-NN classifiers, reaching 78.3% top-1 on ImageNet with a small ViT. Our study also underlines the importance of momentum encoder, multi-crop training, and the use of small patches with ViTs. We implement our findings into a simple self-supervised method, called DINO, which we interpret as a form of self-distillation with no labels. We show the synergy between DINO and ViTs by achieving 80.1% top-1 on ImageNet in linear evaluation with ViT-Base.",True,True,"Caron, Mathilde and Touvron, Hugo and Misra, Ishan and J{\'e}gou, Herv{\'e} and Mairal, Julien and Bojanowski, Piotr and Joulin, Armand",2021.0,,,,,Emerging Properties in Self-Supervised Vision Transformers,[PDF] Emerging Properties in Self-Supervised Vision Transformers,https://openaccess.thecvf.com/content/ICCV2021/papers/Caron_Emerging_Properties_in_Self-Supervised_Vision_Transformers_ICCV_2021_paper.pdf,"Self-supervised ViT features contain semantic segmentation, scene layout, object boundaries, and perform well with k-NN classifiers, unlike supervised ViTs or" "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,self-supervised_task3,\cite{self-supervised_task3},A Simple Framework for Contrastive Learning of Visual Representations,http://arxiv.org/abs/2002.05709v3,"This paper presents SimCLR: a simple framework for contrastive learning of visual representations.
We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.",True,True,"Chen, Ting and Kornblith, Simon and Norouzi, Mohammad and Hinton, Geoffrey",2020.0,,,,,A Simple Framework for Contrastive Learning of Visual Representations,A Simple Framework for Contrastive Learning of Visual Representations,http://arxiv.org/pdf/2002.05709v3,"This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels." "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,self-supervised_task4,\cite{self-supervised_task4},"SSR: An Efficient and Robust Framework for Learning with Unknown Label Noise",http://arxiv.org/abs/2111.11288v2,"Despite the large progress in supervised learning with neural networks, there are significant challenges in obtaining high-quality, large-scale and accurately labelled datasets. In such a context, how to learn in the presence of noisy labels has received more and more attention. 
As a relatively complex problem, in order to achieve good results, current approaches often integrate components from several fields, such as supervised learning, semi-supervised learning, transfer learning and resulting in complicated methods. Furthermore, they often make multiple assumptions about the type of noise of the data. This affects the model robustness and limits its performance under different noise conditions. In this paper, we consider a novel problem setting, Learning with Unknown Label Noise (LULN), that is, learning when both the degree and the type of noise are unknown. Under this setting, unlike previous methods that often introduce multiple assumptions and lead to complex solutions, we propose a simple, efficient and robust framework named Sample Selection and Relabelling (SSR), that with a minimal number of hyperparameters achieves SOTA results in various conditions. At the heart of our method is a sample selection and relabelling mechanism based on a non-parametric KNN classifier~(NPK) $g_q$ and a parametric model classifier~(PMC) $g_p$, respectively, to select the clean samples and gradually relabel the noisy samples. Without bells and whistles, such as model co-training, self-supervised pre-training and semi-supervised learning, and with robustness concerning the settings of its few hyper-parameters, our method significantly surpasses previous methods on both CIFAR10/CIFAR100 with synthetic noise and real-world noisy datasets such as WebVision, Clothing1M and ANIMAL-10N. Code is available at https://github.com/MrChenFeng/SSR_BMVC2022.",True,True,"Feng, Chen and Tzimiropoulos, Georgios and Patras, Ioannis",2021.0,,,,arXiv preprint arXiv:2111.11288,"SSR: An Efficient and Robust Framework for Learning with Unknown Label Noise",[PDF] SSR: An Efficient and Robust Framework for Learning with Unknown ...,https://bmvc2022.mpi-inf.mpg.de/0372.pdf,"In this paper, we consider a novel problem setting, Learning with Unknown Label Noise (LULN), that is, learning when both the degree and the type of noise are."
MoCo provides competitive results under the common linear protocol on ImageNet classification. More importantly, the representations learned by MoCo transfer well to downstream tasks. MoCo can outperform its supervised pre-training counterpart in 7 detection/segmentation tasks on PASCAL VOC, COCO, and other datasets, sometimes surpassing it by large margins. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks.",True,True,"He, Kaiming and Fan, Haoqi and Wu, Yuxin and Xie, Saining and Girshick, Ross",2020.0,,,,,Momentum Contrast for Unsupervised Visual Representation Learning,Momentum Contrast for Unsupervised Visual Representation Learning,http://arxiv.org/pdf/1911.05722v3,"We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder. This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning. MoCo provides competitive results under the common linear protocol on ImageNet classification. More importantly, the representations learned by MoCo transfer well to downstream tasks. MoCo can outperform its supervised pre-training counterpart in 7 detection/segmentation tasks on PASCAL VOC, COCO, and other datasets, sometimes surpassing it by large margins. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks." "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,self-supervised_task7,\cite{self-supervised_task7},Masked Autoencoders Are Scalable Vision Learners,http://arxiv.org/abs/2111.06377v3,"This paper shows that masked autoencoders (MAE) are scalable self-supervised learners for computer vision. Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels. It is based on two core designs. First, we develop an asymmetric encoder-decoder architecture, with an encoder that operates only on the visible subset of patches (without mask tokens), along with a lightweight decoder that reconstructs the original image from the latent representation and mask tokens. Second, we find that masking a high proportion of the input image, e.g., 75%, yields a nontrivial and meaningful self-supervisory task. Coupling these two designs enables us to train large models efficiently and effectively: we accelerate training (by 3x or more) and improve accuracy. Our scalable approach allows for learning high-capacity models that generalize well: e.g., a vanilla ViT-Huge model achieves the best accuracy (87.8%) among methods that use only ImageNet-1K data. Transfer performance in downstream tasks outperforms supervised pre-training and shows promising scaling behavior.",True,True,"He, Kaiming and Chen, Xinlei and Xie, Saining and Li, Yanghao and Doll{\'a}r, Piotr and Girshick, Ross",2022.0,,,,,Masked Autoencoders Are Scalable Vision Learners,Masked Autoencoders Are Scalable Vision Learners,http://arxiv.org/pdf/2111.06377v3,"This paper shows that masked autoencoders (MAE) are scalable self-supervised learners for computer vision. Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels. It is based on two core designs. 
First, we develop an asymmetric encoder-decoder architecture, with an encoder that operates only on the visible subset of patches (without mask tokens), along with a lightweight decoder that reconstructs the original image from the latent representation and mask tokens. Second, we find that masking a high proportion of the input image, e.g., 75%, yields a nontrivial and meaningful self-supervisory task. Coupling these two designs enables us to train large models efficiently and effectively: we accelerate training (by 3x or more) and improve accuracy. Our scalable approach allows for learning high-capacity models that generalize well: e.g., a vanilla ViT-Huge model achieves the best accuracy (87.8%) among methods that use only ImageNet-1K data. Transfer performance in downstream tasks outperforms supervised pre-training and shows promising scaling behavior." "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,self-supervised_task8,\cite{self-supervised_task8},SimMIM: A Simple Framework for Masked Image Modeling,http://arxiv.org/abs/2111.09886v2,"This paper presents SimMIM, a simple framework for masked image modeling. We simplify recently proposed related approaches without special designs such as block-wise masking and tokenization via discrete VAE or clustering. To study what let the masked image modeling task learn good representations, we systematically study the major components in our framework, and find that simple designs of each component have revealed very strong representation learning performance: 1) random masking of the input image with a moderately large masked patch size (e.g., 32) makes a strong pre-text task; 2) predicting raw pixels of RGB values by direct regression performs no worse than the patch classification approaches with complex designs; 3) the prediction head can be as light as a linear layer, with no worse performance than heavier ones. Using ViT-B, our approach achieves 83.8% top-1 fine-tuning accuracy on ImageNet-1K by pre-training also on this dataset, surpassing previous best approach by +0.6%. When applied on a larger model of about 650 million parameters, SwinV2-H, it achieves 87.1% top-1 accuracy on ImageNet-1K using only ImageNet-1K data. We also leverage this approach to facilitate the training of a 3B model (SwinV2-G), that by $40\times$ less data than that in previous practice, we achieve the state-of-the-art on four representative vision benchmarks. The code and models will be publicly available at https://github.com/microsoft/SimMIM.",True,True,"Xie, Zhenda and Zhang, Zheng and Cao, Yue and Lin, Yutong and Bao, Jianmin and Yao, Zhuliang and Dai, Qi and Hu, Han",2022.0,,,,,SimMIM: A Simple Framework for Masked Image Modeling,SimMIM: A Simple Framework for Masked Image Modeling,http://arxiv.org/pdf/2111.09886v2,"This paper presents SimMIM, a simple framework for masked image modeling. We simplify recently proposed related approaches without special designs such as block-wise masking and tokenization via discrete VAE or clustering. 
To study what let the masked image modeling task learn good representations, we systematically study the major components in our framework, and find that simple designs of each component have revealed very strong representation learning performance: 1) random masking of the input image with a moderately large masked patch size (e.g., 32) makes a strong pre-text task; 2) predicting raw pixels of RGB values by direct regression performs no worse than the patch classification approaches with complex designs; 3) the prediction head can be as light as a linear layer, with no worse performance than heavier ones. Using ViT-B, our approach achieves 83.8% top-1 fine-tuning accuracy on ImageNet-1K by pre-training also on this dataset, surpassing previous best approach by +0.6%. When applied on a larger model of about 650 million parameters, SwinV2-H, it achieves 87.1% top-1 accuracy on ImageNet-1K using only ImageNet-1K data. We also leverage this approach to facilitate the training of a 3B model (SwinV2-G), that by $40\times$ less data than that in previous practice, we achieve the state-of-the-art on four representative vision benchmarks. The code and models will be publicly available at https://github.com/microsoft/SimMIM." "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,cl1,\cite{cl1},Masked Siamese Networks for Label-Efficient Learning,http://arxiv.org/abs/2204.07141v1,"We propose Masked Siamese Networks (MSN), a self-supervised learning framework for learning image representations. Our approach matches the representation of an image view containing randomly masked patches to the representation of the original unmasked image. This self-supervised pre-training strategy is particularly scalable when applied to Vision Transformers since only the unmasked patches are processed by the network. As a result, MSNs improve the scalability of joint-embedding architectures, while producing representations of a high semantic level that perform competitively on low-shot image classification. For instance, on ImageNet-1K, with only 5,000 annotated images, our base MSN model achieves 72.4% top-1 accuracy, and with 1% of ImageNet-1K labels, we achieve 75.7% top-1 accuracy, setting a new state-of-the-art for self-supervised learning on this benchmark. Our code is publicly available.",True,True,"Assran, Mahmoud and Caron, Mathilde and Misra, Ishan and Bojanowski, Piotr and Bordes, Florian and Vincent, Pascal and Joulin, Armand and Rabbat, Mike and Ballas, Nicolas",2022.0,,,,,Masked Siamese Networks for Label-Efficient Learning,Masked Siamese Networks for Label-Efficient Learning,https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136910442.pdf,"by M Assran · Cited by 421 — We propose Masked Siamese Networks (MSNs), a self-supervised learning framework that leverages the idea of mask-denoising while avoiding pixel and token-level."
However, presuming all the samples are different contradicts the natural grouping of similar samples in common visual datasets, e.g., multiple views of the same dog. To bridge the gap, this paper proposes an adaptive method that introduces soft inter-sample relations, namely Adaptive Soft Contrastive Learning (ASCL). More specifically, ASCL transforms the original instance discrimination task into a multi-instance soft discrimination task, and adaptively introduces inter-sample relations. As an effective and concise plug-in module for existing self-supervised learning frameworks, ASCL achieves the best performance on several benchmarks in terms of both performance and efficiency. Code is available at https://github.com/MrChenFeng/ASCL_ICPR2022.",True,True,"Feng, Chen and Patras, Ioannis",2022.0,,,,,Adaptive Soft Contrastive Learning,Adaptive Soft Contrastive Learning,http://arxiv.org/pdf/2207.11163v1,"Self-supervised learning has recently achieved great success in representation learning without human annotations. The dominant method -- that is contrastive learning, is generally based on instance discrimination tasks, i.e., individual samples are treated as independent categories. However, presuming all the samples are different contradicts the natural grouping of similar samples in common visual datasets, e.g., multiple views of the same dog. To bridge the gap, this paper proposes an adaptive method that introduces soft inter-sample relations, namely Adaptive Soft Contrastive Learning (ASCL). More specifically, ASCL transforms the original instance discrimination task into a multi-instance soft discrimination task, and adaptively introduces inter-sample relations. As an effective and concise plug-in module for existing self-supervised learning frameworks, ASCL achieves the best performance on several benchmarks in terms of both performance and efficiency. Code is available at https://github.com/MrChenFeng/ASCL_ICPR2022." "Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution",2506.01037v1,cl3,\cite{cl3},MaskCon: Masked Contrastive Learning for Coarse-Labelled Dataset,http://arxiv.org/abs/2303.12756v1,"Deep learning has achieved great success in recent years with the aid of advanced neural network structures and large-scale human-annotated datasets. However, it is often costly and difficult to accurately and efficiently annotate large-scale datasets, especially for some specialized domains where fine-grained labels are required. In this setting, coarse labels are much easier to acquire as they do not require expert knowledge. In this work, we propose a contrastive learning method, called $\textbf{Mask}$ed $\textbf{Con}$trastive learning~($\textbf{MaskCon}$) to address the under-explored problem setting, where we learn with a coarse-labelled dataset in order to address a finer labelling problem. More specifically, within the contrastive learning framework, for each sample our method generates soft-labels with the aid of coarse labels against other samples and another augmented view of the sample in question. By contrast to self-supervised contrastive learning where only the sample's augmentations are considered hard positives, and in supervised contrastive learning where only samples with the same coarse labels are considered hard positives, we propose soft labels based on sample distances, that are masked by the coarse labels. This allows us to utilize both inter-sample relations and coarse labels. 
We demonstrate that our method can obtain as special cases many existing state-of-the-art works and that it provides tighter bounds on the generalization error. Experimentally, our method achieves significant improvement over the current state-of-the-art in various datasets, including CIFAR10, CIFAR100, ImageNet-1K, Standford Online Products and Stanford Cars196 datasets. Code and annotations are available at https://github.com/MrChenFeng/MaskCon_CVPR2023.",True,True,"Feng, Chen and Patras, Ioannis",2023.0,,,,,MaskCon: Masked Contrastive Learning for Coarse-Labelled Dataset,Masked Contrastive Learning for Coarse-Labelled Dataset,https://ieeexplore.ieee.org/iel7/10203037/10203050/10203131.pdf,"by C Feng · 2023 · Cited by 17 — MaskCon is a contrastive learning method for coarse-labeled datasets, generating soft labels based on sample distances to learn fine-grained representations." "Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding",2506.00434v1,ronneberger_unet_miccai_2015,\cite{ronneberger_unet_miccai_2015},U-net: Convolutional networks for biomedical image segmentation,,,True,False,"Ronneberger, Olaf and Fischer, Philipp and Brox, Thomas",2015.0,,,,,U-net: Convolutional networks for biomedical image segmentation,U-Net: Convolutional Networks for Biomedical Image Segmentation,http://arxiv.org/pdf/1505.04597v1,"There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net ." 
"Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding",2506.00434v1,menze_tmi_2015,\cite{menze_tmi_2015},The {Multimodal} {Brain} {Tumor} {Image} {Segmentation} {Benchmark} ({BRATS}),,,True,False,"Menze, Bjoern H and Jakab, Andras and Bauer, Stefan and Kalpathy-Cramer, Jayashree and Farahani, Keyvan and Kirby, Justin and Burren, Yuliya and Porz, Nicole and Slotboom, Johannes and Wiest, Roland and others",2015.0,,,,IEEE TMI,The {Multimodal} {Brain} {Tumor} {Image} {Segmentation} {Benchmark} ({BRATS}),The Multimodal Brain Tumor Image Segmentation Benchmark ...,https://pmc.ncbi.nlm.nih.gov/articles/PMC4833122/,The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) - PMC The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) Find articles by Thomas J Taylor Find articles by Nicholas J Tustison [DOI00671-8)] [PMC free article] [PubMed] [Google Scholar00671-8&)] [DOI] [PMC free article] [PubMed] [Google Scholar] [DOI] [PMC free article] [PubMed] [Google Scholar] [DOI] [PMC free article] [PubMed] [Google Scholar] [DOI] [PMC free article] [PubMed] [Google Scholar] [DOI] [PMC free article] [PubMed] [Google Scholar] [DOI] [PMC free article] [PubMed] [Google Scholar] [DOI] [PMC free article] [PubMed] [Google Scholar] [DOI] [PMC free article] [PubMed] [Google Scholar] [DOI] [PMC free article] [PubMed] [Google Scholar] [DOI] [PMC free article] [PubMed] [Google Scholar] [DOI] [PMC free article] [PubMed] [Google Scholar] [DOI1522-2594(200004)43:4%3C589::aid-mrm14%3E3.0.co;2-2)] [PubMed] [Google Scholar%20and%20B(0)%20variations%20in%20quantitative%20T2%20measurements%20using%20MRI&author=J%20Sled&author=G%20Pike&volume=43&issue=4&publication_year=2000&pages=589-593&pmid=10748435&doi=10.1002/(sici)1522-2594(200004)43:4%3C589::aid-mrm14%3E3.0.co;2-2&)] "Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding",2506.00434v1,bakas_arxiv_2019,\cite{bakas_arxiv_2019},"Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge",http://arxiv.org/abs/1811.02629v3,"Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. 
Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients that underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.",True,True,"Bakas, Spyridon and Reyes, Mauricio and Jakab, Andras and Bauer, Stefan and Rempfler, Markus and Crimi, Alessandro and Shinohara, Russell Takeshi and Berger, Christoph and Ha, Sung Min and Rozycki, Martin and others",2018.0,,,,arXiv preprint arXiv:1811.02629,"Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge",Identifying the Best Machine Learning Algorithms for Brain Tumor ...,https://arxiv.org/abs/1811.02629,"Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge, by Spyridon Bakas and 426 other authors" "Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding",2506.00434v1,baid_arxiv_2021,\cite{baid_arxiv_2021},"The RSNA-ASNR-MICCAI BraTS 2021 Benchmark on Brain Tumor Segmentation and Radiogenomic Classification",http://arxiv.org/abs/2107.02314v2,"The BraTS 2021 challenge celebrates its 10th anniversary and is jointly organized by the Radiological Society of North America (RSNA), the American Society of Neuroradiology (ASNR), and the Medical Image Computing and Computer Assisted Interventions (MICCAI) society. Since its inception, BraTS has been focusing on being a common benchmarking venue for brain glioma segmentation algorithms, with well-curated multi-institutional multi-parametric magnetic resonance imaging (mpMRI) data. Gliomas are the most common primary malignancies of the central nervous system, with varying degrees of aggressiveness and prognosis. The RSNA-ASNR-MICCAI BraTS 2021 challenge targets the evaluation of computational algorithms assessing the same tumor compartmentalization, as well as the underlying tumor's molecular characterization, in pre-operative baseline mpMRI data from 2,040 patients. Specifically, the two tasks that BraTS 2021 focuses on are: a) the segmentation of the histologically distinct brain tumor sub-regions, and b) the classification of the tumor's O[6]-methylguanine-DNA methyltransferase (MGMT) promoter methylation status.
The performance evaluation of all participating algorithms in BraTS 2021 will be conducted through the Sage Bionetworks Synapse platform (Task 1) and Kaggle (Task 2), concluding in distributing to the top ranked participants monetary awards of $60,000 collectively.",True,True,"Baid, Ujjwal and Ghodasara, Satyam and Mohan, Suyash and Bilello, Michel and Calabrese, Evan and Colak, Errol and Farahani, Keyvan and Kalpathy-Cramer, Jayashree and Kitamura, Felipe C and Pati, Sarthak and others",2021.0,,,,arXiv preprint arXiv:2107.02314,"The RSNA-ASNR-MICCAI BraTS 2021 Benchmark on Brain Tumor Segmentation and Radiogenomic Classification",BraTS-Lighthouse 2025 Challenge - syn64153130 - Wiki,https://www.synapse.org/Synapse:syn64153130/wiki/631064,"[1] U.Baid, et al., The RSNA-ASNR-MICCAI BraTS 2021 Benchmark on Brain Tumor Segmentation and Radiogenomic Classification, arXiv:2107.02314, 2021." "Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding",2506.00434v1,myronenko_miccai_2019,\cite{myronenko_miccai_2019},3D MRI brain tumor segmentation using autoencoder regularization,http://arxiv.org/abs/1810.11654v3,"Automated segmentation of brain tumors from 3D magnetic resonance images (MRIs) is necessary for the diagnosis, monitoring, and treatment planning of the disease. Manual delineation practices require anatomical knowledge, are expensive, time consuming and can be inaccurate due to human error. Here, we describe a semantic segmentation network for tumor subregion segmentation from 3D MRIs based on encoder-decoder architecture. Due to a limited training dataset size, a variational auto-encoder branch is added to reconstruct the input image itself in order to regularize the shared decoder and impose additional constraints on its layers. The current approach won 1st place in the BraTS 2018 challenge.",True,True,"Myronenko, Andriy",2019.0,,,,,3D MRI brain tumor segmentation using autoencoder regularization,3D MRI brain tumor segmentation using autoencoder regularization,http://arxiv.org/pdf/1810.11654v3,"Automated segmentation of brain tumors from 3D magnetic resonance images (MRIs) is necessary for the diagnosis, monitoring, and treatment planning of the disease. Manual delineation practices require anatomical knowledge, are expensive, time consuming and can be inaccurate due to human error. Here, we describe a semantic segmentation network for tumor subregion segmentation from 3D MRIs based on encoder-decoder architecture. Due to a limited training dataset size, a variational auto-encoder branch is added to reconstruct the input image itself in order to regularize the shared decoder and impose additional constraints on its layers. The current approach won 1st place in the BraTS 2018 challenge." 
"Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding",2506.00434v1,jiang_cascaded_unet_miccai_2020,\cite{jiang_cascaded_unet_miccai_2020},Two-stage cascaded u-net: 1st place solution to brats challenge 2019 segmentation task,,,True,False,"Jiang, Zeyu and Ding, Changxing and Liu, Minfeng and Tao, Dacheng",2020.0,,,,,Two-stage cascaded u-net: 1st place solution to brats challenge 2019 segmentation task,Two-Stage Cascaded U-Net: 1st Place Solution to BraTS Challenge ...,https://www.semanticscholar.org/paper/Two-Stage-Cascaded-U-Net%3A-1st-Place-Solution-to-Jiang-Ding/6eead90d63cc679263ef608121db075b78e03960,A novel two-stage cascaded U-Net to segment the substructures of brain tumors from coarse to fine is devised and won the 1st place in the BraTS 2019 "Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding",2506.00434v1,isensee_nnunet_miccai_2021,\cite{isensee_nnunet_miccai_2021},nnU-Net for Brain Tumor Segmentation,http://arxiv.org/abs/2011.00848v1,"We apply nnU-Net to the segmentation task of the BraTS 2020 challenge. The unmodified nnU-Net baseline configuration already achieves a respectable result. By incorporating BraTS-specific modifications regarding postprocessing, region-based training, a more aggressive data augmentation as well as several minor modifications to the nnUNet pipeline we are able to improve its segmentation performance substantially. We furthermore re-implement the BraTS ranking scheme to determine which of our nnU-Net variants best fits the requirements imposed by it. Our final ensemble took the first place in the BraTS 2020 competition with Dice scores of 88.95, 85.06 and 82.03 and HD95 values of 8.498,17.337 and 17.805 for whole tumor, tumor core and enhancing tumor, respectively.",True,True,"Isensee, Fabian and J{\""a}ger, Paul F and Full, Peter M and Vollmuth, Philipp and Maier-Hein, Klaus H",2021.0,,,,,nnU-Net for Brain Tumor Segmentation,Brain tumor segmentation with advanced nnU-Net - ScienceDirect.com,https://www.sciencedirect.com/science/article/pii/S2772528624000013,"This paper introduces an extended version of the nnU-Net architecture for brain tumor segmentation, addressing both adult (Glioma) and pediatric tumors." "Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding",2506.00434v1,luu_miccai_2022,\cite{luu_miccai_2022},Extending nn-UNet for brain tumor segmentation,http://arxiv.org/abs/2112.04653v1,"Brain tumor segmentation is essential for the diagnosis and prognosis of patients with gliomas. The brain tumor segmentation challenge has continued to provide a great source of data to develop automatic algorithms to perform the task. This paper describes our contribution to the 2021 competition. We developed our methods based on nn-UNet, the winning entry of last year competition. We experimented with several modifications, including using a larger network, replacing batch normalization with group normalization, and utilizing axial attention in the decoder. Internal 5-fold cross validation as well as online evaluation from the organizers showed the effectiveness of our approach, with minor improvement in quantitative metrics when compared to the baseline. The proposed models won first place in the final ranking on unseen test data. 
The codes, pretrained weights, and docker image for the winning submission are publicly available at https://github.com/rixez/Brats21_KAIST_MRI_Lab",True,True,"Luu, Huan Minh and Park, Sung-Hong",2021.0,,,,,Extending nn-UNet for brain tumor segmentation,Extending nn-UNet for Brain Tumor Segmentation,https://link.springer.com/chapter/10.1007/978-3-031-09002-8_16,"by HM Luu · 2021 · Cited by 185 — We extended the nn-UNet framework by using a larger network, replacing batch normalization with group normalization, and using axial attention" "Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding",2506.00434v1,zeineldin_miccai_2022,\cite{zeineldin_miccai_2022},"Multimodal CNN Networks for Brain Tumor Segmentation in MRI: A BraTS 2022 Challenge Solution",http://arxiv.org/abs/2212.09310v1,"Automatic segmentation is essential for the brain tumor diagnosis, disease prognosis, and follow-up therapy of patients with gliomas. Still, accurate detection of gliomas and their sub-regions in multimodal MRI is very challenging due to the variety of scanners and imaging protocols. Over the last years, the BraTS Challenge has provided a large number of multi-institutional MRI scans as a benchmark for glioma segmentation algorithms. This paper describes our contribution to the BraTS 2022 Continuous Evaluation challenge. We propose a new ensemble of multiple deep learning frameworks namely, DeepSeg, nnU-Net, and DeepSCAN for automatic glioma boundaries detection in pre-operative MRI. It is worth noting that our ensemble models took first place in the final evaluation on the BraTS testing dataset with Dice scores of 0.9294, 0.8788, and 0.8803, and Hausdorf distance of 5.23, 13.54, and 12.05, for the whole tumor, tumor core, and enhancing tumor, respectively. Furthermore, the proposed ensemble method ranked first in the final ranking on another unseen test dataset, namely Sub-Saharan Africa dataset, achieving mean Dice scores of 0.9737, 0.9593, and 0.9022, and HD95 of 2.66, 1.72, 3.32 for the whole tumor, tumor core, and enhancing tumor, respectively. The docker image for the winning submission is publicly available at (https://hub.docker.com/r/razeineldin/camed22).",True,True,"Zeineldin, Ramy A and Karar, Mohamed E and Burgert, Oliver and Mathis-Ullrich, Franziska",2022.0,,,,arXiv preprint arXiv:2212.09310,"Multimodal CNN Networks for Brain Tumor Segmentation in MRI: A BraTS 2022 Challenge Solution",Multimodal CNN Networks for Brain Tumor Segmentation in MRI,https://link.springer.com/chapter/10.1007/978-3-031-33842-7_11,"The BraTS challenge is designed to encourage research in the field of medical image segmentation, with a focus on segmenting brain tumors in MRI" "Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding",2506.00434v1,isensee_nnunet_nature_2021,\cite{isensee_nnunet_nature_2021},nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation,,,True,False,"Isensee, Fabian and Jaeger, Paul F and Kohl, Simon AA and Petersen, Jens and Maier-Hein, Klaus H",2021.0,,,,Nature methods,nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation,nnU-Net: a self-configuring method for deep learning-based ... 
- Nature,https://www.nature.com/articles/s41592-020-01008-z,"We developed nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task." "Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding",2506.00434v1,wang_transbts_miccai_2021,\cite{wang_transbts_miccai_2021},TransBTS: Multimodal Brain Tumor Segmentation Using Transformer,http://arxiv.org/abs/2103.04430v2,"Transformer, which can benefit from global (long-range) information modeling using self-attention mechanisms, has been successful in natural language processing and 2D image classification recently. However, both local and global features are crucial for dense prediction tasks, especially for 3D medical image segmentation. In this paper, we for the first time exploit Transformer in 3D CNN for MRI Brain Tumor Segmentation and propose a novel network named TransBTS based on the encoder-decoder structure. To capture the local 3D context information, the encoder first utilizes 3D CNN to extract the volumetric spatial feature maps. Meanwhile, the feature maps are reformed elaborately for tokens that are fed into Transformer for global feature modeling.
The decoder leverages the features embedded by Transformer and performs progressive upsampling to predict the detailed segmentation map. Extensive experimental results on both BraTS 2019 and 2020 datasets show that TransBTS achieves comparable or higher results than previous state-of-the-art 3D methods for brain tumor segmentation on 3D MRI scans. The source code is available at https://github.com/Wenxuan-1119/TransBTS" "Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding",2506.00434v1,swinunetr,\cite{swinunetr},"Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images",http://arxiv.org/abs/2201.01266v1,"Semantic segmentation of brain tumors is a fundamental medical image analysis task involving multiple MRI imaging modalities that can assist clinicians in diagnosing the patient and successively studying the progression of the malignant entity. In recent years, Fully Convolutional Neural Networks (FCNNs) approaches have become the de facto standard for 3D medical image segmentation. The popular ""U-shaped"" network architecture has achieved state-of-the-art performance benchmarks on different 2D and 3D semantic segmentation tasks and across various imaging modalities. However, due to the limited kernel size of convolution layers in FCNNs, their performance of modeling long-range information is sub-optimal, and this can lead to deficiencies in the segmentation of tumors with variable sizes. On the other hand, transformer models have demonstrated excellent capabilities in capturing such long-range information in multiple domains, including natural language processing and computer vision. Inspired by the success of vision transformers and their variants, we propose a novel segmentation model termed Swin UNEt TRansformers (Swin UNETR). Specifically, the task of 3D brain tumor semantic segmentation is reformulated as a sequence to sequence prediction problem wherein multi-modal input data is projected into a 1D sequence of embedding and used as an input to a hierarchical Swin transformer as the encoder. The swin transformer encoder extracts features at five different resolutions by utilizing shifted windows for computing self-attention and is connected to an FCNN-based decoder at each resolution via skip connections. We have participated in BraTS 2021 segmentation challenge, and our proposed model ranks among the top-performing approaches in the validation phase. Code: https://monai.io/research/swin-unetr",True,True,"Hatamizadeh, Ali and Nath, Vishwesh and Tang, Yucheng and Yang, Dong and Roth, Holger R and Xu, Daguang",2021.0,,,,,"Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images",Swin Transformers for Semantic Segmentation of Brain Tumors in ...,https://arxiv.org/abs/2201.01266,"We propose a novel segmentation model termed Swin UNEt TRansformers (Swin UNETR). Specifically, the task of 3D brain tumor semantic segmentation is" "Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding",2506.00434v1,chen_med3d_arxiv_2019,\cite{chen_med3d_arxiv_2019},Med3D: Transfer Learning for 3D Medical Image Analysis,http://arxiv.org/abs/1904.00625v4,"The performance of deep learning is significantly affected by the volume of training data. Models pre-trained on massive datasets such as ImageNet become a powerful weapon for speeding up training convergence and improving accuracy. Similarly, models based on large datasets are important for the development of deep learning in 3D medical images.
However, it is extremely challenging to build a sufficiently large dataset due to difficulty of data acquisition and annotation in 3D medical imaging. We aggregate the dataset from several medical challenges to build 3DSeg-8 dataset with diverse modalities, target organs, and pathologies. To extract general medical three-dimensional (3D) features, we design a heterogeneous 3D network called Med3D to co-train multi-domain 3DSeg-8 so as to make a series of pre-trained models. We transfer Med3D pre-trained models to lung segmentation in LIDC dataset, pulmonary nodule classification in LIDC dataset and liver segmentation on LiTS challenge. Experiments show that the Med3D can accelerate the training convergence speed of target 3D medical tasks 2 times compared with model pre-trained on Kinetics dataset, and 10 times compared with training from scratch as well as improve accuracy ranging from 3% to 20%. Transferring our Med3D model on state-of-the-art DenseASPP segmentation network, in case of single model, we achieve 94.6\% Dice coefficient which approaches the result of top-ranged algorithms on the LiTS challenge.",True,True,"Chen, Sihong and Ma, Kai and Zheng, Yefeng",2019.0,,,,arXiv preprint arXiv:1904.00625,Med3D: Transfer Learning for 3D Medical Image Analysis,Med3D: Transfer Learning for 3D Medical Image Analysis,http://arxiv.org/pdf/1904.00625v4,"The performance of deep learning is significantly affected by the volume of training data. Models pre-trained on massive datasets such as ImageNet become a powerful weapon for speeding up training convergence and improving accuracy. Similarly, models based on large datasets are important for the development of deep learning in 3D medical images. However, it is extremely challenging to build a sufficiently large dataset due to difficulty of data acquisition and annotation in 3D medical imaging. We aggregate the dataset from several medical challenges to build 3DSeg-8 dataset with diverse modalities, target organs, and pathologies. To extract general medical three-dimensional (3D) features, we design a heterogeneous 3D network called Med3D to co-train multi-domain 3DSeg-8 so as to make a series of pre-trained models. We transfer Med3D pre-trained models to lung segmentation in LIDC dataset, pulmonary nodule classification in LIDC dataset and liver segmentation on LiTS challenge. Experiments show that the Med3D can accelerate the training convergence speed of target 3D medical tasks 2 times compared with model pre-trained on Kinetics dataset, and 10 times compared with training from scratch as well as improve accuracy ranging from 3% to 20%. Transferring our Med3D model on state-of-the-art DenseASPP segmentation network, in case of single model, we achieve 94.6\% Dice coefficient which approaches the result of top-ranged algorithms on the LiTS challenge." "Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding",2506.00434v1,zhu_modelgenesis_mia_2021,\cite{zhu_modelgenesis_mia_2021},Models Genesis,http://arxiv.org/abs/2004.07882v4,"Transfer learning from natural images to medical images has been established as one of the most practical paradigms in deep learning for medical image analysis. To fit this paradigm, however, 3D imaging tasks in the most prominent imaging modalities (e.g., CT and MRI) have to be reformulated and solved in 2D, losing rich 3D anatomical information, thereby inevitably compromising its performance.
To overcome this limitation, we have built a set of models, called Generic Autodidactic Models, nicknamed Models Genesis, because they are created ex nihilo (with no manual labeling), self-taught (learnt by self-supervision), and generic (served as source models for generating application-specific target models). Our extensive experiments demonstrate that our Models Genesis significantly outperform learning from scratch and existing pre-trained 3D models in all five target 3D applications covering both segmentation and classification. More importantly, learning a model from scratch simply in 3D may not necessarily yield performance better than transfer learning from ImageNet in 2D, but our Models Genesis consistently top any 2D/2.5D approaches including fine-tuning the models pre-trained from ImageNet as well as fine-tuning the 2D versions of our Models Genesis, confirming the importance of 3D anatomical information and significance of Models Genesis for 3D medical imaging. This performance is attributed to our unified self-supervised learning framework, built on a simple yet powerful observation: the sophisticated and recurrent anatomy in medical images can serve as strong yet free supervision signals for deep models to learn common anatomical representation automatically via self-supervision. As open science, all codes and pre-trained Models Genesis are available at https://github.com/MrGiovanni/ModelsGenesis.",True,True,"Zhou, Zongwei and Sodha, Vatsal and Pang, Jiaxuan and Gotway, Michael B and Liang, Jianming",2021.0,,,,Medical image analysis,Models Genesis,Models Genesis,http://arxiv.org/pdf/2004.07882v4,"Transfer learning from natural images to medical images has been established as one of the most practical paradigms in deep learning for medical image analysis. To fit this paradigm, however, 3D imaging tasks in the most prominent imaging modalities (e.g., CT and MRI) have to be reformulated and solved in 2D, losing rich 3D anatomical information, thereby inevitably compromising its performance. To overcome this limitation, we have built a set of models, called Generic Autodidactic Models, nicknamed Models Genesis, because they are created ex nihilo (with no manual labeling), self-taught (learnt by self-supervision), and generic (served as source models for generating application-specific target models). Our extensive experiments demonstrate that our Models Genesis significantly outperform learning from scratch and existing pre-trained 3D models in all five target 3D applications covering both segmentation and classification. More importantly, learning a model from scratch simply in 3D may not necessarily yield performance better than transfer learning from ImageNet in 2D, but our Models Genesis consistently top any 2D/2.5D approaches including fine-tuning the models pre-trained from ImageNet as well as fine-tuning the 2D versions of our Models Genesis, confirming the importance of 3D anatomical information and significance of Models Genesis for 3D medical imaging. This performance is attributed to our unified self-supervised learning framework, built on a simple yet powerful observation: the sophisticated and recurrent anatomy in medical images can serve as strong yet free supervision signals for deep models to learn common anatomical representation automatically via self-supervision. As open science, all codes and pre-trained Models Genesis are available at https://github.com/MrGiovanni/ModelsGenesis." 
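The preceding records (Med3D, Models Genesis) both argue that 3D pretraining beats 2D transfer for volumetric medical data. As a concrete illustration of the restoration-style pretext task the Models Genesis abstract describes (corrupt a 3D sub-volume, train an encoder-decoder to restore it, keep the encoder as a pre-trained backbone), here is a minimal PyTorch sketch; TinyUNet3D and corrupt() are illustrative stand-ins, not the authors' released code (that lives at https://github.com/MrGiovanni/ModelsGenesis).

```python
# Minimal sketch (not the released Models Genesis code) of a restoration-style
# self-supervised pretext task: corrupt a 3D sub-volume, train an
# encoder-decoder to restore the original, keep the encoder for downstream use.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet3D(nn.Module):
    """Toy 3D encoder-decoder; real work would use a full 3D U-Net."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(ch * 2, ch, 2, stride=2), nn.ReLU(),
            nn.Conv3d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

def corrupt(x, p=0.5):
    """Two example corruptions; the paper also uses non-linear intensity
    transforms and local pixel shuffling."""
    x = x.clone()
    d, h, w = x.shape[-3:]
    if torch.rand(()) < p:  # cuboid masking, akin to in/out-painting
        zs, ys, xs = (int(torch.randint(0, s // 2, ())) for s in (d, h, w))
        x[..., zs:zs + d // 4, ys:ys + h // 4, xs:xs + w // 4] = 0.0
    if torch.rand(()) < p:  # additive noise as a crude stand-in for shuffling
        x = x + 0.1 * torch.randn_like(x)
    return x

model = TinyUNet3D()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
volume = torch.rand(2, 1, 32, 32, 32)  # fake CT/MRI sub-volumes in [0, 1]
for step in range(3):
    loss = F.mse_loss(model(corrupt(volume)), volume)  # restoration objective
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"step {step}: restoration loss {loss.item():.4f}")
```

The key design point from the abstract is that the supervision signal is free: recurrent anatomy in medical images lets the restoration target be the uncorrupted input itself, with no manual labels.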
Test-time Vocabulary Adaptation for Language-driven Object Detection,2506.00333v1,zhu2023survey,\cite{zhu2023survey},"A Survey on Open-Vocabulary Detection and Segmentation: Past, Present, and Future",http://arxiv.org/abs/2307.09220v2,"As the most fundamental scene understanding tasks, object detection and segmentation have made tremendous progress in deep learning era. Due to the expensive manual labeling cost, the annotated categories in existing datasets are often small-scale and pre-defined, i.e., state-of-the-art fully-supervised detectors and segmentors fail to generalize beyond the closed vocabulary. To resolve this limitation, in the last few years, the community has witnessed an increasing attention toward Open-Vocabulary Detection (OVD) and Segmentation (OVS). By ``open-vocabulary'', we mean that the models can classify objects beyond pre-defined categories. In this survey, we provide a comprehensive review on recent developments of OVD and OVS. A taxonomy is first developed to organize different tasks and methodologies. We find that the permission and usage of weak supervision signals can well discriminate different methodologies, including: visual-semantic space mapping, novel visual feature synthesis, region-aware training, pseudo-labeling, knowledge distillation, and transfer learning. The proposed taxonomy is universal across different tasks, covering object detection, semantic/instance/panoptic segmentation, 3D and video understanding. The main design principles, key challenges, development routes, methodology strengths, and weaknesses are thoroughly analyzed. In addition, we benchmark each task along with the vital components of each method in appendix and updated online at https://github.com/seanzhuh/awesome-open-vocabulary-detection-and-segmentation. Finally, several promising directions are provided and discussed to stimulate future research.",True,True,"Zhu, Chaoyang and Chen, Long",2023.0,,,,,"A Survey on Open-Vocabulary Detection and Segmentation: Past, Present, and Future",Awesome OVD-OVS - A Survey on Open-Vocabulary ...,https://github.com/seanzhuh/Awesome-Open-Vocabulary-Detection-and-Segmentation,"Awesome OVD-OVS - A Survey on Open-Vocabulary Detection and Segmentation: Past, Present, and Future" Test-time Vocabulary Adaptation for Language-driven Object Detection,2506.00333v1,radford2021learning,\cite{radford2021learning},Learning Transferable Visual Models From Natural Language Supervision,http://arxiv.org/abs/2103.00020v1,"State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. 
The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.",True,True,"Radford, Alec and Kim, Jong Wook and Hallacy, Chris and Ramesh, Aditya and Goh, Gabriel and Agarwal, Sandhini and Sastry, Girish and Askell, Amanda and Mishkin, Pamela and Clark, Jack and Krueger, Gretchen and Sutskever, Ilya",2021.0,,,,,Learning Transferable Visual Models From Natural Language Supervision,Learning Transferable Visual Models From Natural Language Supervision,http://arxiv.org/pdf/2103.00020v1,"State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP." Test-time Vocabulary Adaptation for Language-driven Object Detection,2506.00333v1,lin2014microsoft,\cite{lin2014microsoft},Microsoft COCO: Common Objects in Context,http://arxiv.org/abs/1405.0312v3,"We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4-year-old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN.
Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.",True,True,"Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C. Lawrence",2014.0,,,,,Microsoft COCO: Common Objects in Context,Microsoft COCO: Common Objects in Context,http://arxiv.org/pdf/1405.0312v3,"We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4-year-old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model." Test-time Vocabulary Adaptation for Language-driven Object Detection,2506.00333v1,gupta2019lvis,\cite{gupta2019lvis},LVIS: A Dataset for Large Vocabulary Instance Segmentation,http://arxiv.org/abs/1908.03195v2,"Progress on object detection is enabled by datasets that focus the research community's attention on open challenges. This process led us from simple images to complex scenes and from bounding boxes to segmentation masks. In this work, we introduce LVIS (pronounced `el-vis'): a new dataset for Large Vocabulary Instance Segmentation. We plan to collect ~2 million high-quality instance segmentation masks for over 1000 entry-level object categories in 164k images. Due to the Zipfian distribution of categories in natural images, LVIS naturally has a long tail of categories with few training samples. Given that state-of-the-art deep learning methods for object detection perform poorly in the low-sample regime, we believe that our dataset poses an important and exciting new scientific challenge. LVIS is available at http://www.lvisdataset.org.",True,True,"Gupta, Agrim and Dollar, Piotr and Girshick, Ross",2019.0,,,,,LVIS: A Dataset for Large Vocabulary Instance Segmentation,LVIS: A Dataset for Large Vocabulary Instance Segmentation,http://arxiv.org/pdf/1908.03195v2,"Progress on object detection is enabled by datasets that focus the research community's attention on open challenges. This process led us from simple images to complex scenes and from bounding boxes to segmentation masks. In this work, we introduce LVIS (pronounced `el-vis'): a new dataset for Large Vocabulary Instance Segmentation. We plan to collect ~2 million high-quality instance segmentation masks for over 1000 entry-level object categories in 164k images. Due to the Zipfian distribution of categories in natural images, LVIS naturally has a long tail of categories with few training samples. Given that state-of-the-art deep learning methods for object detection perform poorly in the low-sample regime, we believe that our dataset poses an important and exciting new scientific challenge. LVIS is available at http://www.lvisdataset.org."
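Several of the open-vocabulary detection records in this block (Detic, RegionCLIP, SHiNe) build their classifiers from CLIP text embeddings. The zero-shot recipe sketched in the CLIP abstract above is compact enough to show directly; this is a minimal sketch assuming the openai/CLIP package (pip install git+https://github.com/openai/CLIP.git), where "example.jpg" and the class list are placeholders.

```python
# Minimal sketch of CLIP zero-shot classification: embed class names with the
# text encoder and classify an image by cosine similarity to its visual feature.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

class_names = ["dog", "cat", "truck"]  # any open vocabulary, no retraining
prompts = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(prompts)
    # L2-normalize so the dot product becomes cosine similarity
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * img_feat @ txt_feat.T).softmax(dim=-1)

for name, p in zip(class_names, probs[0].tolist()):
    print(f"{name}: {p:.3f}")
```

Open-vocabulary detectors reuse essentially this text-embedding classifier, swapping the whole-image feature for per-region features, which is why the vocabulary can be changed at test time by re-encoding a new set of class names.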
Test-time Vocabulary Adaptation for Language-driven Object Detection,2506.00333v1,deng2009imagenet,\cite{deng2009imagenet},{ImageNet: a Large-Scale Hierarchical Image Database},,,True,False,"Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li",2009.0,,,,,{ImageNet: a Large-Scale Hierarchical Image Database},(PDF) ImageNet: a Large-Scale Hierarchical Image Database,https://www.researchgate.net/publication/221361415_ImageNet_a_Large-Scale_Hierarchical_Image_Database,This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. Test-time Vocabulary Adaptation for Language-driven Object Detection,2506.00333v1,zhou2022detecting,\cite{zhou2022detecting},Detecting Twenty-thousand Classes using Image-level Supervision,http://arxiv.org/abs/2201.02605v3,"Current object detectors are limited in vocabulary size due to the small scale of detection datasets. Image classifiers, on the other hand, reason about much larger vocabularies, as their datasets are larger and easier to collect. We propose Detic, which simply trains the classifiers of a detector on image classification data and thus expands the vocabulary of detectors to tens of thousands of concepts. Unlike prior work, Detic does not need complex assignment schemes to assign image labels to boxes based on model predictions, making it much easier to implement and compatible with a range of detection architectures and backbones. Our results show that Detic yields excellent detectors even for classes without box annotations. It outperforms prior work on both open-vocabulary and long-tail detection benchmarks. Detic provides a gain of 2.4 mAP for all classes and 8.3 mAP for novel classes on the open-vocabulary LVIS benchmark. On the standard LVIS benchmark, Detic obtains 41.7 mAP when evaluated on all classes, or only rare classes, hence closing the gap in performance for object categories with few samples. For the first time, we train a detector with all the twenty-one-thousand classes of the ImageNet dataset and show that it generalizes to new datasets without finetuning. Code is available at \url{https://github.com/facebookresearch/Detic}.",True,True,"Zhou, Xingyi and Girdhar, Rohit and Joulin, Armand and Kr{\""a}henb{\""u}hl, Philipp and Misra, Ishan",2022.0,,,,,Detecting Twenty-thousand Classes using Image-level Supervision,[PDF] Detecting Twenty-thousand Classes using Image-level Supervision,https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136690344.pdf,"We propose Detic, which simply trains the classifiers of a detector on image classification data and thus expands the vocabulary of detectors to tens of" Test-time Vocabulary Adaptation for Language-driven Object Detection,2506.00333v1,zhong2022regionclip,\cite{zhong2022regionclip},RegionCLIP: Region-based Language-Image Pretraining,http://arxiv.org/abs/2112.09106v1,"Contrastive language-image pretraining (CLIP) using image-text pairs has achieved impressive results on image classification in both zero-shot and transfer learning settings. However, we show that directly applying such models to recognize image regions for object detection leads to poor performance due to a domain shift: CLIP was trained to match an image as a whole to a text description, without capturing the fine-grained alignment between image regions and text spans. 
To mitigate this issue, we propose a new method called RegionCLIP that significantly extends CLIP to learn region-level visual representations, thus enabling fine-grained alignment between image regions and textual concepts. Our method leverages a CLIP model to match image regions with template captions and then pretrains our model to align these region-text pairs in the feature space. When transferring our pretrained model to the open-vocabulary object detection tasks, our method significantly outperforms the state of the art by 3.8 AP50 and 2.2 AP for novel categories on COCO and LVIS datasets, respectively. Moreover, the learned region representations support zero-shot inference for object detection, showing promising results on both COCO and LVIS datasets. Our code is available at https://github.com/microsoft/RegionCLIP.",True,True,"Zhong, Yiwu and Yang, Jianwei and Zhang, Pengchuan and Li, Chunyuan and Codella, Noel and Li, Liunian Harold and Zhou, Luowei and Dai, Xiyang and Yuan, Lu and Li, Yin and Gao, Jianfeng",2022.0,,,,,RegionCLIP: Region-based Language-Image Pretraining,RegionCLIP: Region-based Language-Image Pretraining - arXiv,https://arxiv.org/abs/2112.09106,"We propose a new method called RegionCLIP that significantly extends CLIP to learn region-level visual representations, thus enabling fine-grained alignment." Test-time Vocabulary Adaptation for Language-driven Object Detection,2506.00333v1,ma2024codet,\cite{ma2024codet},"CoDet: Co-Occurrence Guided Region-Word Alignment for Open-Vocabulary Object Detection",http://arxiv.org/abs/2310.16667v1,"Deriving reliable region-word alignment from image-text pairs is critical to learn object-level vision-language representations for open-vocabulary object detection. Existing methods typically rely on pre-trained or self-trained vision-language models for alignment, which are prone to limitations in localization accuracy or generalization capabilities. In this paper, we propose CoDet, a novel approach that overcomes the reliance on pre-aligned vision-language space by reformulating region-word alignment as a co-occurring object discovery problem. Intuitively, by grouping images that mention a shared concept in their captions, objects corresponding to the shared concept shall exhibit high co-occurrence among the group. CoDet then leverages visual similarities to discover the co-occurring objects and align them with the shared concept. Extensive experiments demonstrate that CoDet has superior performances and compelling scalability in open-vocabulary detection, e.g., by scaling up the visual backbone, CoDet achieves 37.0 $\text{AP}^m_{novel}$ and 44.7 $\text{AP}^m_{all}$ on OV-LVIS, surpassing the previous SoTA by 4.2 $\text{AP}^m_{novel}$ and 9.8 $\text{AP}^m_{all}$.
Code is available at https://github.com/CVMI-Lab/CoDet.",True,True,"Ma, Chuofan and Jiang, Yi and Wen, Xin and Yuan, Zehuan and Qi, Xiaojuan",2023.0,,,,,"CoDet: Co-Occurrence Guided Region-Word Alignment for Open-Vocabulary Object Detection",(NeurIPS2023) CoDet: Co-Occurrence Guided Region ...,https://github.com/CVMI-Lab/CoDet,Train an open-vocabulary detector with web-scale image-text pairs; Align regions and words by co-occurrence instead of region-text similarity Test-time Vocabulary Adaptation for Language-driven Object Detection,2506.00333v1,liu2024shine,\cite{liu2024shine},SHiNe: Semantic Hierarchy Nexus for Open-vocabulary Object Detection,http://arxiv.org/abs/2405.10053v1,"Open-vocabulary object detection (OvOD) has transformed detection into a language-guided task, empowering users to freely define their class vocabularies of interest during inference. However, our initial investigation indicates that existing OvOD detectors exhibit significant variability when dealing with vocabularies across various semantic granularities, posing a concern for real-world deployment. To this end, we introduce Semantic Hierarchy Nexus (SHiNe), a novel classifier that uses semantic knowledge from class hierarchies. It runs offline in three steps: i) it retrieves relevant super-/sub-categories from a hierarchy for each target class; ii) it integrates these categories into hierarchy-aware sentences; iii) it fuses these sentence embeddings to generate the nexus classifier vector. Our evaluation on various detection benchmarks demonstrates that SHiNe enhances robustness across diverse vocabulary granularities, achieving up to +31.9% mAP50 with ground truth hierarchies, while retaining improvements using hierarchies generated by large language models. Moreover, when applied to open-vocabulary classification on ImageNet-1k, SHiNe improves the CLIP zero-shot baseline by +2.8% accuracy. SHiNe is training-free and can be seamlessly integrated with any off-the-shelf OvOD detector, without incurring additional computational overhead during inference. The code is open source.",True,True,"Liu, Mingxuan and Hayes, Tyler L. and Ricci, Elisa and Csurka, Gabriela and Volpi, Riccardo",2024.0,,,,,SHiNe: Semantic Hierarchy Nexus for Open-vocabulary Object Detection,[PDF] Semantic Hierarchy Nexus for Open-vocabulary Object Detection,https://openaccess.thecvf.com/content/CVPR2024/papers/Liu_SHiNe_Semantic_Hierarchy_Nexus_for_Open-vocabulary_Object_Detection_CVPR_2024_paper.pdf,"SHiNe is training-free and can be seamlessly integrated with any off-the-shelf OvOD detector, without incurring additional computational overhead dur- ing" "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ssl_2,\cite{ssl_2},Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks,,,True,False,"Lee, Dong-Hyun",2013.0,,,,,Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks,Pseudo-Label : The Simple and Efficient Semi-Supervised ...,https://www.researchgate.net/publication/280581078_Pseudo-Label_The_Simple_and_Efficient_Semi-Supervised_Learning_Method_for_Deep_Neural_Networks,"We propose the simple and efficient method of semi-supervised learning for deep neural networks. 
Basically, the proposed network is trained in a supervised" "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ssl_9,\cite{ssl_9},Semi-supervised Learning by Entropy Minimization,,,True,False,"Yves Grandvalet and Yoshua Bengio",2004.0,,,,,Semi-supervised Learning by Entropy Minimization,Semi-supervised Learning by Entropy Minimization - NIPS,https://papers.nips.cc/paper/2740-semi-supervised-learning-by-entropy-minimization,"We consider the semi-supervised learning problem, where a decision rule is to be learned from labeled and unlabeled data. In this framework, we motivate minimum entropy regularization, which enables to incorporate unlabeled data in the standard supervised learning. In the terminology used here, semi-supervised learning refers to learning a decision rule on X from labeled and unlabeled data. In the probabilistic framework, semi-supervised learning can be modeled as a missing data problem, which can be addressed by generative models such as mixture models thanks to the EM algorithm and extensions thereof. Generative models apply to the joint density of patterns and class (X, Y)." "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ssl_10,\cite{ssl_10},"Curriculum Labeling: Revisiting Pseudo-Labeling for Semi-Supervised Learning",http://arxiv.org/abs/2001.06001v2,"In this paper we revisit the idea of pseudo-labeling in the context of semi-supervised learning where a learning algorithm has access to a small set of labeled samples and a large set of unlabeled samples. Pseudo-labeling works by applying pseudo-labels to samples in the unlabeled set by using a model trained on the combination of the labeled samples and any previously pseudo-labeled samples, and iteratively repeating this process in a self-training cycle. Current methods seem to have abandoned this approach in favor of consistency regularization methods that train models under a combination of different styles of self-supervised losses on the unlabeled samples and standard supervised losses on the labeled samples. We empirically demonstrate that pseudo-labeling can in fact be competitive with the state-of-the-art, while being more resilient to out-of-distribution samples in the unlabeled set. We identify two key factors that allow pseudo-labeling to achieve such remarkable results: (1) applying curriculum learning principles and (2) avoiding concept drift by restarting model parameters before each self-training cycle. We obtain 94.91% accuracy on CIFAR-10 using only 4,000 labeled samples, and 68.87% top-1 accuracy on Imagenet-ILSVRC using only 10% of the labeled samples.
The code is available at https://github.com/uvavision/Curriculum-Labeling",True,True,"Paola Cascante{-}Bonilla and Fuwen Tan and Yanjun Qi and Vicente Ordonez",2021.0,,,,,"Curriculum Labeling: Revisiting Pseudo-Labeling for Semi-Supervised Learning",Revisiting Pseudo-Labeling for Semi-Supervised Learning,https://arxiv.org/abs/2001.06001,by P Cascante-Bonilla · 2020 · Cited by 409 — In this paper we revisit the idea of pseudo-labeling in the context of semi-supervised learning where a learning algorithm has access to a small set of labeled "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ssl_11,\cite{ssl_11},"Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results",http://arxiv.org/abs/1703.01780v6,"The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels from 35.24% to 9.11%.",True,True,"Antti Tarvainen and Harri Valpola",2017.0,,,,,"Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results",[PDF] Weight-averaged consistency targets improve semi-supervised ...,https://arxiv.org/pdf/1703.01780,"Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55% to 6.28%, and on." "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ssl_12,\cite{ssl_12},"Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning",http://arxiv.org/abs/1606.04586v1,"Effective convolutional neural networks are trained on large sets of labeled data. However, creating large labeled datasets is a very costly and time-consuming task. Semi-supervised learning uses unlabeled data to train a model with higher accuracy when there is a limited set of labeled data available. In this paper, we consider the problem of semi-supervised learning with convolutional neural networks. Techniques such as randomized data augmentation, dropout and random max-pooling provide better generalization and stability for classifiers that are trained using gradient descent. Multiple passes of an individual sample through the network might lead to different predictions due to the non-deterministic behavior of these techniques. 
We propose an unsupervised loss function that takes advantage of the stochastic nature of these methods and minimizes the difference between the predictions of multiple passes of a training sample through the network. We evaluate the proposed method on several benchmark datasets.",True,True,"Mehdi Sajjadi and Mehran Javanmardi and Tolga Tasdizen",2016.0,,,,,"Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning",Regularization With Stochastic Transformations and Perturbations ...,https://arxiv.org/abs/1606.04586,Abstract page for arXiv paper 1606.04586: Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning. "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ssl_13,\cite{ssl_13},Temporal Ensembling for Semi-Supervised Learning,http://arxiv.org/abs/1610.02242v3,"In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44% to 7.05% in SVHN with 500 labels and from 18.63% to 16.55% in CIFAR-10 with 4000 labels, and further to 5.12% and 12.16% by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels.",True,True,"Samuli Laine and Timo Aila",2017.0,,,,,Temporal Ensembling for Semi-Supervised Learning,"Review — Π-Model, Temporal Ensembling ... - Sik-Ho Tsang",https://sh-tsang.medium.com/review-%CF%80-model-temporal-ensembling-temporal-ensembling-for-semi-supervised-learning-9cb6eea6865e,"Temporal Ensembling for Semi-Supervised Learning. Stochastic Augmentation, Network Dropout, & Momentum Encoder are Used." "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ssl_14,\cite{ssl_14},Unsupervised Data Augmentation for Consistency Training,http://arxiv.org/abs/1904.12848v6,"Semi-supervised learning lately has shown much promise in improving deep learning models when labeled data is scarce. Common among recent approaches is the use of consistency training on a large amount of unlabeled data to constrain model predictions to be invariant to input noise. In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically those produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning. By substituting simple noising operations with advanced data augmentation methods such as RandAugment and back-translation, our method brings substantial improvements across six language and three vision tasks under the same consistency training framework. 
On the IMDb text classification dataset, with only 20 labeled examples, our method achieves an error rate of 4.20, outperforming the state-of-the-art model trained on 25,000 labeled examples. On a standard semi-supervised learning benchmark, CIFAR-10, our method outperforms all previous approaches and achieves an error rate of 5.43 with only 250 examples. Our method also combines well with transfer learning, e.g., when finetuning from BERT, and yields improvements in high-data regime, such as ImageNet, whether when there is only 10% labeled data or when a full labeled set with 1.3M extra unlabeled examples is used. Code is available at https://github.com/google-research/uda.",True,True,"Qizhe Xie and Zihang Dai and Eduard H. Hovy and Thang Luong and Quoc Le",2020.0,,,,,Unsupervised Data Augmentation for Consistency Training,Unsupervised Data Augmentation for Consistency Training,http://arxiv.org/pdf/1904.12848v6,"Semi-supervised learning lately has shown much promise in improving deep learning models when labeled data is scarce. Common among recent approaches is the use of consistency training on a large amount of unlabeled data to constrain model predictions to be invariant to input noise. In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically those produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning. By substituting simple noising operations with advanced data augmentation methods such as RandAugment and back-translation, our method brings substantial improvements across six language and three vision tasks under the same consistency training framework. On the IMDb text classification dataset, with only 20 labeled examples, our method achieves an error rate of 4.20, outperforming the state-of-the-art model trained on 25,000 labeled examples. On a standard semi-supervised learning benchmark, CIFAR-10, our method outperforms all previous approaches and achieves an error rate of 5.43 with only 250 examples. Our method also combines well with transfer learning, e.g., when finetuning from BERT, and yields improvements in high-data regime, such as ImageNet, whether when there is only 10% labeled data or when a full labeled set with 1.3M extra unlabeled examples is used. Code is available at https://github.com/google-research/uda." "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,tnnls_2,\cite{tnnls_2},"MutexMatch: Semi-Supervised Learning with Mutex-Based Consistency Regularization",http://arxiv.org/abs/2203.14316v2,"The core issue in semi-supervised learning (SSL) lies in how to effectively leverage unlabeled data, whereas most existing methods tend to put a great emphasis on the utilization of high-confidence samples yet seldom fully explore the usage of low-confidence samples. In this paper, we aim to utilize low-confidence samples in a novel way with our proposed mutex-based consistency regularization, namely MutexMatch. Specifically, the high-confidence samples are required to exactly predict ""what it is"" by conventional True-Positive Classifier, while the low-confidence samples are employed to achieve a simpler goal -- to predict with ease ""what it is not"" by True-Negative Classifier. In this sense, we not only mitigate the pseudo-labeling errors but also make full use of the low-confidence unlabeled data by consistency of dissimilarity degree. 
MutexMatch achieves superior performance on multiple benchmark datasets, i.e., CIFAR-10, CIFAR-100, SVHN, STL-10, mini-ImageNet and Tiny-ImageNet. More importantly, our method further shows superiority when the amount of labeled data is scarce, e.g., 92.23% accuracy with only 20 labeled data on CIFAR-10. Our code and model weights have been released at https://github.com/NJUyued/MutexMatch4SSL.",True,True,"Yue Duan and Zhen Zhao and Lei Qi and Lei Wang and Luping Zhou and Yinghuan Shi and Yang Gao",2024.0,,,,{IEEE} Trans. on Neural Networks and Learning Systems,"MutexMatch: Semi-Supervised Learning with Mutex-Based Consistency Regularization",MutexMatch: Semi-Supervised Learning with Mutex-Based ... - arXiv,https://arxiv.org/abs/2203.14316,"In this paper, we aim to utilize low-confidence samples in a novel way with our proposed mutex-based consistency regularization, namely MutexMatch." "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ssl_3,\cite{ssl_3},MixMatch: A Holistic Approach to Semi-Supervised Learning,http://arxiv.org/abs/1905.02249v2,"Semi-supervised learning has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets. In this work, we unify the current dominant approaches for semi-supervised learning to produce a new algorithm, MixMatch, that works by guessing low-entropy labels for data-augmented unlabeled examples and mixing labeled and unlabeled data using MixUp. We show that MixMatch obtains state-of-the-art results by a large margin across many datasets and labeled data amounts. For example, on CIFAR-10 with 250 labels, we reduce error rate by a factor of 4 (from 38% to 11%) and by a factor of 2 on STL-10. We also demonstrate how MixMatch can help achieve a dramatically better accuracy-privacy trade-off for differential privacy. Finally, we perform an ablation study to tease apart which components of MixMatch are most important for its success.",True,True,"David Berthelot and Nicholas Carlini and Ian J. Goodfellow and Nicolas Papernot and Avital Oliver and Colin Raffel",2019.0,,,,,MixMatch: A Holistic Approach to Semi-Supervised Learning,MixMatch: a holistic approach to semi-supervised learning,https://dl.acm.org/doi/10.5555/3454287.3454741,"A new algorithm, MixMatch, that guesses low-entropy labels for data-augmented un-labeled examples and mixes labeled and unlabeled data using MixUp." "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ssl_4,\cite{ssl_4},"FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence",http://arxiv.org/abs/2001.07685v2,"Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance. In this paper, we demonstrate the power of a simple combination of two common SSL methods: consistency regularization and pseudo-labeling. Our algorithm, FixMatch, first generates pseudo-labels using the model's predictions on weakly-augmented unlabeled images. For a given image, the pseudo-label is only retained if the model produces a high-confidence prediction. The model is then trained to predict the pseudo-label when fed a strongly-augmented version of the same image. 
Despite its simplicity, we show that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks, including 94.93% accuracy on CIFAR-10 with 250 labels and 88.61% accuracy with 40 -- just 4 labels per class. Since FixMatch bears many similarities to existing SSL methods that achieve worse performance, we carry out an extensive ablation study to tease apart the experimental factors that are most important to FixMatch's success. We make our code available at https://github.com/google-research/fixmatch.",True,True,"Kihyuk Sohn and David Berthelot and Nicholas Carlini and Zizhao Zhang and Han Zhang and Colin Raffel and Ekin Dogus Cubuk and Alexey Kurakin and Chun{-}Liang Li",2020.0,,,,,"FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence",FixMatch: simplifying semi-supervised learning with consistency and ...,https://dl.acm.org/doi/abs/10.5555/3495724.3495775,"In this paper we propose FixMatch, an algorithm that is a significant simplification of existing SSL methods." "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ssl_16,\cite{ssl_16},"ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring",http://arxiv.org/abs/1911.09785v2,"We improve the recently-proposed ""MixMatch"" semi-supervised learning algorithm by introducing two new techniques: distribution alignment and augmentation anchoring. Distribution alignment encourages the marginal distribution of predictions on unlabeled data to be close to the marginal distribution of ground-truth labels. Augmentation anchoring feeds multiple strongly augmented versions of an input into the model and encourages each output to be close to the prediction for a weakly-augmented version of the same input. To produce strong augmentations, we propose a variant of AutoAugment which learns the augmentation policy while the model is being trained. Our new algorithm, dubbed ReMixMatch, is significantly more data-efficient than prior work, requiring between $5\times$ and $16\times$ less data to reach the same accuracy. For example, on CIFAR-10 with 250 labeled examples we reach $93.73\%$ accuracy (compared to MixMatch's accuracy of $93.58\%$ with $4{,}000$ examples) and a median accuracy of $84.92\%$ with just four labels per class. We make our code and data open-source at https://github.com/google-research/remixmatch.",True,True,David Berthelot and Nicholas Carlini and Ekin D. Cubuk and Alex Kurakin and Kihyuk Sohn and Han Zhang and Colin Raffel,2020.0,,,,,"ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring",ReMixMatch: Semi-Supervised Learning with Distribution Alignment ...,https://arxiv.org/abs/1911.09785,"We improve the recently-proposed ""MixMatch"" semi-supervised learning algorithm by introducing two new techniques: distribution alignment and augmentation" "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ssl_19,\cite{ssl_19},"FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling",http://arxiv.org/abs/2110.08263v3,"The recently proposed FixMatch achieved state-of-the-art results on most semi-supervised learning (SSL) benchmarks. 
However, like other modern SSL algorithms, FixMatch uses a pre-defined constant threshold for all classes to select unlabeled data that contribute to the training, thus failing to consider different learning status and learning difficulties of different classes. To address this issue, we propose Curriculum Pseudo Labeling (CPL), a curriculum learning approach to leverage unlabeled data according to the model's learning status. The core of CPL is to flexibly adjust thresholds for different classes at each time step to let pass informative unlabeled data and their pseudo labels. CPL does not introduce additional parameters or computations (forward or backward propagation). We apply CPL to FixMatch and call our improved algorithm FlexMatch. FlexMatch achieves state-of-the-art performance on a variety of SSL benchmarks, with especially strong performances when the labeled data are extremely limited or when the task is challenging. For example, FlexMatch achieves 13.96% and 18.96% error rate reduction over FixMatch on CIFAR-100 and STL-10 datasets respectively, when there are only 4 labels per class. CPL also significantly boosts the convergence speed, e.g., FlexMatch can use only 1/5 training time of FixMatch to achieve even better performance. Furthermore, we show that CPL can be easily adapted to other SSL algorithms and remarkably improve their performances. We open-source our code at https://github.com/TorchSSL/TorchSSL.",True,True,"Zhang, Bowen and Wang, Yidong and Hou, Wenxin and Wu, Hao and Wang, Jindong and Okumura, Manabu and Shinozaki, Takahiro",2021.0,,,,,"FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling",Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling,https://arxiv.org/abs/2110.08263,"We propose Curriculum Pseudo Labeling (CPL), a curriculum learning approach to leverage unlabeled data according to the model's learning status." "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ssl_20,\cite{ssl_20},FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning,http://arxiv.org/abs/2205.07246v3,"Semi-supervised Learning (SSL) has witnessed great success owing to the impressive performances brought by various methods based on pseudo labeling and consistency regularization. However, we argue that existing methods might fail to utilize the unlabeled data more effectively since they either use a pre-defined / fixed threshold or an ad-hoc threshold adjusting scheme, resulting in inferior performance and slow convergence. We first analyze a motivating example to obtain intuitions on the relationship between the desirable threshold and model's learning status. Based on the analysis, we hence propose FreeMatch to adjust the confidence threshold in a self-adaptive manner according to the model's learning status. We further introduce a self-adaptive class fairness regularization penalty to encourage the model for diverse predictions during the early training stage. Extensive experiments indicate the superiority of FreeMatch especially when the labeled data are extremely rare. FreeMatch achieves 5.78%, 13.59%, and 1.28% error rate reduction over the latest state-of-the-art method FlexMatch on CIFAR-10 with 1 label per class, STL-10 with 4 labels per class, and ImageNet with 100 labels per class, respectively. Moreover, FreeMatch can also boost the performance of imbalanced SSL. 
The code can be found at https://github.com/microsoft/Semi-supervised-learning.",True,True,"Yidong Wang and Hao Chen and Qiang Heng and Wenxin Hou and Yue Fan and Zhen Wu and Jindong Wang and Marios Savvides and Takahiro Shinozaki and Bhiksha Raj and Bernt Schiele and Xing Xie",2023.0,,,,,FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning,FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning,https://openreview.net/forum?id=PDrUPTXJI_A,We propose FreeMatch to define and adjust the confidence threshold in a self-adaptive manner for semi-supervised learning. "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ssl_8,\cite{ssl_8},"SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning",http://arxiv.org/abs/2301.10921v2,"The critical challenge of Semi-Supervised Learning (SSL) is how to effectively leverage the limited labeled data and massive unlabeled data to improve the model's generalization performance. In this paper, we first revisit the popular pseudo-labeling methods via a unified sample weighting formulation and demonstrate the inherent quantity-quality trade-off problem of pseudo-labeling with thresholding, which may prohibit learning. To this end, we propose SoftMatch to overcome the trade-off by maintaining both high quantity and high quality of pseudo-labels during training, effectively exploiting the unlabeled data. We derive a truncated Gaussian function to weight samples based on their confidence, which can be viewed as a soft version of the confidence threshold. We further enhance the utilization of weakly-learned classes by proposing a uniform alignment approach. In experiments, SoftMatch shows substantial improvements across a wide variety of benchmarks, including image, text, and imbalanced classification.",True,True,Hao Chen and Ran Tao and Yue Fan and Yidong Wang and Jindong Wang and Bernt Schiele and Xing Xie and Bhiksha Raj and Marios Savvides,2023.0,,,,,"SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning",Addressing the Quantity-Quality Tradeoff in Semi-supervised Learning,https://openreview.net/forum?id=ymt1zQXBDiF,"This paper proposes SoftMatch to improve both the quantity and quality of pseudo-labels in semi-supervised learning. Basically, the authors" "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ssl_6,\cite{ssl_6},SimMatch: Semi-supervised Learning with Similarity Matching,http://arxiv.org/abs/2203.06915v2,"Learning with few labeled data has been a longstanding problem in the computer vision and machine learning research community. In this paper, we introduced a new semi-supervised learning framework, SimMatch, which simultaneously considers semantic similarity and instance similarity. In SimMatch, the consistency regularization will be applied on both semantic-level and instance-level. The different augmented views of the same instance are encouraged to have the same class prediction and similar similarity relationship with respect to other instances. Next, we instantiated a labeled memory buffer to fully leverage the ground truth labels on instance-level and bridge the gaps between the semantic and instance similarities. Finally, we proposed the \textit{unfolding} and \textit{aggregation} operation which allows these two similarities to be isomorphically transformed with each other.
In this way, the semantic and instance pseudo-labels can be mutually propagated to generate more high-quality and reliable matching targets. Extensive experimental results demonstrate that SimMatch improves the performance of semi-supervised learning tasks across different benchmark datasets and different settings. Notably, with 400 epochs of training, SimMatch achieves 67.2\%, and 74.4\% Top-1 Accuracy with 1\% and 10\% labeled examples on ImageNet, which significantly outperforms the baseline methods and is better than previous semi-supervised learning frameworks. Code and pre-trained models are available at https://github.com/KyleZheng1997/simmatch.",True,True,"Mingkai Zheng and Shan You and Lang Huang and Fei Wang and Chen Qian and Chang Xu",2022.0,,,,,SimMatch: Semi-supervised Learning with Similarity Matching,SimMatch: Semi-supervised Learning with Similarity ...,https://arxiv.org/abs/2203.06915,"by M Zheng · 2022 · Cited by 309 — In this paper, we introduced a new semi-supervised learning framework, SimMatch, which simultaneously considers semantic similarity and instance similarity." "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ssl_7,\cite{ssl_7},SimMatchV2: Semi-Supervised Learning with Graph Consistency,http://arxiv.org/abs/2308.06692v1,"Semi-Supervised image classification is one of the most fundamental problem in computer vision, which significantly reduces the need for human labor. In this paper, we introduce a new semi-supervised learning algorithm - SimMatchV2, which formulates various consistency regularizations between labeled and unlabeled data from the graph perspective. In SimMatchV2, we regard the augmented view of a sample as a node, which consists of a label and its corresponding representation. Different nodes are connected with the edges, which are measured by the similarity of the node representations. Inspired by the message passing and node classification in graph theory, we propose four types of consistencies, namely 1) node-node consistency, 2) node-edge consistency, 3) edge-edge consistency, and 4) edge-node consistency. We also uncover that a simple feature normalization can reduce the gaps of the feature norm between different augmented views, significantly improving the performance of SimMatchV2. Our SimMatchV2 has been validated on multiple semi-supervised learning benchmarks. Notably, with ResNet-50 as our backbone and 300 epochs of training, SimMatchV2 achieves 71.9\% and 76.2\% Top-1 Accuracy with 1\% and 10\% labeled examples on ImageNet, which significantly outperforms the previous methods and achieves state-of-the-art performance. 
Code and pre-trained models are available at \href{https://github.com/mingkai-zheng/SimMatchV2}{https://github.com/mingkai-zheng/SimMatchV2}.",True,True,"Mingkai Zheng and Shan You and Lang Huang and Chen Luo and Fei Wang and Chen Qian and Chang Xu",2023.0,,,,,SimMatchV2: Semi-Supervised Learning with Graph Consistency,Semi-Supervised Learning with Graph Consistency,https://arxiv.org/abs/2308.06692,"by M Zheng · 2023 · Cited by 17 — In this paper, we introduce a new semi-supervised learning algorithm - SimMatchV2, which formulates various consistency regularizations between labeled and" "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ssl_17,\cite{ssl_17},Label Propagation for Deep Semi-supervised Learning,http://arxiv.org/abs/1904.04717v1,"Semi-supervised learning is becoming increasingly important because it can combine data carefully labeled by humans with abundant unlabeled data to train deep neural networks. Classic methods on semi-supervised learning that have focused on transductive learning have not been fully exploited in the inductive framework followed by modern deep learning. The same holds for the manifold assumption---that similar examples should get the same prediction. In this work, we employ a transductive label propagation method that is based on the manifold assumption to make predictions on the entire dataset and use these predictions to generate pseudo-labels for the unlabeled data and train a deep neural network. At the core of the transductive method lies a nearest neighbor graph of the dataset that we create based on the embeddings of the same network.Therefore our learning process iterates between these two steps. We improve performance on several datasets especially in the few labels regime and show that our work is complementary to current state of the art.",True,True,"Ahmet Iscen and Giorgos Tolias and Yannis Avrithis and Ondrej Chum",2019.0,,,,,Label Propagation for Deep Semi-supervised Learning,[PDF] Label Propagation for Deep Semi-Supervised Learning,https://openaccess.thecvf.com/content_CVPR_2019/papers/Iscen_Label_Propagation_for_Deep_Semi-Supervised_Learning_CVPR_2019_paper.pdf,"Label propagation uses a transductive method to generate pseudo-labels for unlabeled data, using a graph based on network embeddings, to train a deep neural" "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,tnnls_3,\cite{tnnls_3},Graph-Based Semi-Supervised Learning: {A} Comprehensive Review,,,True,False,"Zixing Song and Xiangli Yang and Zenglin Xu and Irwin King",2023.0,,,,{IEEE} Trans. on Neural Networks and Learning Systems,Graph-Based Semi-Supervised Learning: {A} Comprehensive Review,Graph-Based Semi-Supervised Learning,https://ieeexplore.ieee.org/document/9737635,"Graph-Based Semi-Supervised Learning: A Comprehensive Review | IEEE Journals & Magazine | IEEE Xplore Publisher: IEEE An essential class of SSL methods, referred to as graph-based semi-supervised learning (GSSL) methods in the literature, is to first represent each sample as a node in an affinity graph, and then, the label information of unlabeled samples can be inferred based on the structure of the constructed graph. Publisher: IEEE A similarity graph is constructed based on the given data, including both the labeled and unlabeled samples. 
" "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ossl_5,\cite{ossl_5},CoMatch: Semi-supervised Learning with Contrastive Graph Regularization,http://arxiv.org/abs/2011.11183v2,"Semi-supervised learning has been an effective paradigm for leveraging unlabeled data to reduce the reliance on labeled data. We propose CoMatch, a new semi-supervised learning method that unifies dominant approaches and addresses their limitations. CoMatch jointly learns two representations of the training data, their class probabilities and low-dimensional embeddings. The two representations interact with each other to jointly evolve. The embeddings impose a smoothness constraint on the class probabilities to improve the pseudo-labels, whereas the pseudo-labels regularize the structure of the embeddings through graph-based contrastive learning. CoMatch achieves state-of-the-art performance on multiple datasets. It achieves substantial accuracy improvements on the label-scarce CIFAR-10 and STL-10. On ImageNet with 1% labels, CoMatch achieves a top-1 accuracy of 66.0%, outperforming FixMatch by 12.6%. Furthermore, CoMatch achieves better representation learning performance on downstream tasks, outperforming both supervised learning and self-supervised learning. Code and pre-trained models are available at https://github.com/salesforce/CoMatch.",True,True,"Junnan Li and Caiming Xiong and Steven C. H. Hoi",2021.0,,,,,CoMatch: Semi-supervised Learning with Contrastive Graph Regularization,CoMatch: Semi-Supervised Learning With Contrastive ...,https://openaccess.thecvf.com/content/ICCV2021/papers/Li_CoMatch_Semi-Supervised_Learning_With_Contrastive_Graph_Regularization_ICCV_2021_paper.pdf,"by J Li · 2021 · Cited by 384 — We propose CoMatch, a new semi-supervised learning method that unifies dominant approaches and addresses their limitations. CoMatch jointly learns two" "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,rep_3,\cite{rep_3},Big Self-Supervised Models are Strong Semi-Supervised Learners,http://arxiv.org/abs/2006.10029v2,"One paradigm for learning from few labeled examples while making best use of a large amount of unlabeled data is unsupervised pretraining followed by supervised fine-tuning. Although this paradigm uses unlabeled data in a task-agnostic way, in contrast to common approaches to semi-supervised learning for computer vision, we show that it is surprisingly effective for semi-supervised learning on ImageNet. A key ingredient of our approach is the use of big (deep and wide) networks during pretraining and fine-tuning. We find that, the fewer the labels, the more this approach (task-agnostic use of unlabeled data) benefits from a bigger network. After fine-tuning, the big network can be further improved and distilled into a much smaller one with little loss in classification accuracy by using the unlabeled examples for a second time, but in a task-specific way. The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2, supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge.
This procedure achieves 73.9% ImageNet top-1 accuracy with just 1% of the labels ($\le$13 labeled images per class) using ResNet-50, a $10\times$ improvement in label efficiency over the previous state-of-the-art. With 10% of labels, ResNet-50 trained with our method achieves 77.5% top-1 accuracy, outperforming standard supervised training with all of the labels.",True,True,"Ting Chen and Simon Kornblith and Kevin Swersky and Mohammad Norouzi and Geoffrey E. Hinton",2020.0,,,,,Big Self-Supervised Models are Strong Semi-Supervised Learners,[2006.10029] Big Self-Supervised Models are Strong Semi ...,https://arxiv.org/abs/2006.10029,by T Chen · 2020 · Cited by 2883 — We show that it is surprisingly effective for semi-supervised learning on ImageNet. A key ingredient of our approach is the use of big (deep and wide) networks. "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ssl_1,\cite{ssl_1},Realistic Evaluation of Deep Semi-Supervised Learning Algorithms,http://arxiv.org/abs/1804.09170v4,"Semi-supervised learning (SSL) provides a powerful framework for leveraging unlabeled data when labels are limited or expensive to obtain. SSL algorithms based on deep neural networks have recently proven successful on standard benchmark tasks. However, we argue that these benchmarks fail to address many issues that these algorithms would face in real-world applications. After creating a unified reimplementation of various widely-used SSL techniques, we test them in a suite of experiments designed to address these issues. We find that the performance of simple baselines which do not use unlabeled data is often underreported, that SSL methods differ in sensitivity to the amount of labeled and unlabeled data, and that performance can degrade substantially when the unlabeled dataset contains out-of-class examples. To help guide SSL research towards real-world applicability, we make our unified reimplemention and evaluation platform publicly available.",True,True,"Avital Oliver and Augustus Odena and Colin Raffel and Ekin Dogus Cubuk and Ian J. Goodfellow",2018.0,,,,,Realistic Evaluation of Deep Semi-Supervised Learning Algorithms,Realistic Evaluation of Deep Semi-Supervised Learning Algorithms,https://arxiv.org/abs/1804.09170,Semi-supervised learning (SSL) provides a powerful framework for leveraging unlabeled data when labels are limited or expensive to obtain. "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ossl_2,\cite{ossl_2},Semi-Supervised Learning under Class Distribution Mismatch,,,True,False,"Yanbei Chen and Xiatian Zhu and Wei Li and Shaogang Gong",2020.0,,,,,Semi-Supervised Learning under Class Distribution Mismatch,[PDF] Semi-Supervised Learning under Class Distribution Mismatch,https://ojs.aaai.org/index.php/AAAI/article/view/5763/5619,"Class distribution mismatch in semi-supervised learning occurs when labeled and unlabeled data come from different class distributions, unlike conventional SSL." "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ossl_14,\cite{ossl_14},SCOMatch: Alleviating Overtrusting in Open-set Semi-supervised Learning,http://arxiv.org/abs/2409.17512v1,"Open-set semi-supervised learning (OSSL) leverages practical open-set unlabeled data, comprising both in-distribution (ID) samples from seen classes and out-of-distribution (OOD) samples from unseen classes, for semi-supervised learning (SSL). 
Prior OSSL methods initially learned the decision boundary between ID and OOD with labeled ID data, subsequently employing self-training to refine this boundary. These methods, however, suffer from the tendency to overtrust the labeled ID data: the scarcity of labeled data caused the distribution bias between the labeled samples and the entire ID data, which misleads the decision boundary to overfit. The subsequent self-training process, based on the overfitted result, fails to rectify this problem. In this paper, we address the overtrusting issue by treating OOD samples as an additional class, forming a new SSL process. Specifically, we propose SCOMatch, a novel OSSL method that 1) selects reliable OOD samples as new labeled data with an OOD memory queue and a corresponding update strategy and 2) integrates the new SSL process into the original task through our Simultaneous Close-set and Open-set self-training. SCOMatch refines the decision boundary of ID and OOD classes across the entire dataset, thereby leading to improved results. Extensive experimental results show that SCOMatch significantly outperforms the state-of-the-art methods on various benchmarks. The effectiveness is further verified through ablation studies and visualization.",True,True,"Wang, Zerun and Xiang, Liuyu and Huang, Lang and Mao, Jiafeng and Xiao, Ling and Yamasaki, Toshihiko",2025.0,,,,,SCOMatch: Alleviating Overtrusting in Open-set Semi-supervised Learning,Alleviating Overtrusting in Open-set Semi-supervised Learning - arXiv,https://arxiv.org/abs/2409.17512,"We propose SCOMatch, a novel OSSL method that 1) selects reliable OOD samples as new labeled data with an OOD memory queue and a corresponding update strategy." "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ossl_12,\cite{ossl_12},Rethinking safe semi-supervised learning: Transferring the open-set problem to a close-set one,,,True,False,"Ma, Qiankun and Gao, Jiyao and Zhan, Bo and Guo, Yunpeng and Zhou, Jiliu and Wang, Yan",2023.0,,,,,Rethinking safe semi-supervised learning: Transferring the open-set problem to a close-set one,[PDF] Rethinking Safe Semi-supervised Learning - CVF Open Access,https://openaccess.thecvf.com/content/ICCV2023/supplemental/Ma_Rethinking_Safe_Semi-supervised_ICCV_2023_supplemental.pdf,Page 1. Rethinking Safe Semi-supervised Learning: Transferring the Open-set Problem to A Close-set One. -Supplementary Material-. 1. Detailed Datasets. In this "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ossl_16,\cite{ossl_16},"Semi-Supervised Learning via Weight-aware Distillation under Class Distribution Mismatch",http://arxiv.org/abs/2308.11874v1,"Semi-Supervised Learning (SSL) under class distribution mismatch aims to tackle a challenging problem wherein unlabeled data contain lots of unknown categories unseen in the labeled ones. In such mismatch scenarios, traditional SSL suffers severe performance damage due to the harmful invasion of the instances with unknown categories into the target classifier. In this study, by strict mathematical reasoning, we reveal that the SSL error under class distribution mismatch is composed of pseudo-labeling error and invasion error, both of which jointly bound the SSL population risk. 
To alleviate the SSL error, we propose a robust SSL framework called Weight-Aware Distillation (WAD) that, by weights, selectively transfers knowledge beneficial to the target task from unsupervised contrastive representation to the target classifier. Specifically, WAD captures adaptive weights and high-quality pseudo labels to target instances by exploring point mutual information (PMI) in representation space to maximize the role of unlabeled data and filter unknown categories. Theoretically, we prove that WAD has a tight upper bound of population risk under class distribution mismatch. Experimentally, extensive results demonstrate that WAD outperforms five state-of-the-art SSL approaches and one standard baseline on two benchmark datasets, CIFAR10 and CIFAR100, and an artificial cross-dataset. The code is available at https://github.com/RUC-DWBI-ML/research/tree/main/WAD-master.",True,True,"Du, Pan and Zhao, Suyun and Sheng, Zisen and Li, Cuiping and Chen, Hong",2023.0,,,,,"Semi-Supervised Learning via Weight-aware Distillation under Class Distribution Mismatch",Semi-Supervised Learning via Weight-Aware Distillation ...,https://openaccess.thecvf.com/content/ICCV2023/papers/Du_Semi-Supervised_Learning_via_Weight-Aware_Distillation_under_Class_Distribution_Mismatch_ICCV_2023_paper.pdf,by P Du · 2023 · Cited by 11 — Semi-Supervised Learning (SSL) under class distribution mismatch aims to tackle a challenging problem wherein unlabeled data contain lots of unknown "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ossl_5,\cite{ossl_5},"Safe-Student for Safe Deep Semi-Supervised Learning with Unseen-Class Unlabeled Data",,,True,False,"Rundong He and Zhongyi Han and Xiankai Lu and Yilong Yin",2022.0,,,,,"Safe-Student for Safe Deep Semi-Supervised Learning with Unseen-Class Unlabeled Data",SAFER-STUDENT for Safe Deep Semi-Supervised Learning With...,https://openreview.net/forum?id=j8i42Lrh0Z, "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ossl_6,\cite{ossl_6},"{SAFER-STUDENT} for Safe Deep Semi-Supervised Learning With Unseen-Class Unlabeled Data",,,True,False,"Rundong He and Zhongyi Han and Xiankai Lu and Yilong Yin",2024.0,,,,{IEEE} Trans. on Knowledge and Data Engineering,"{SAFER-STUDENT} for Safe Deep Semi-Supervised Learning With Unseen-Class Unlabeled Data",SAFER-STUDENT for Safe Deep Semi-Supervised Learning With ...,https://www.researchgate.net/publication/371000311_SAFER-STUDENT_for_Safe_Deep_Semi-Supervised_Learning_With_Unseen-Class_Unlabeled_Data,"Deep semi-supervised learning (SSL) methods aim to utilize abundant unlabeled data to improve the seen-class classification. Several similar definitions have emerged to describe this scenario, including safe SSL [9], open-set SSL [22,24,31,45], and the challenge of managing UnLabeled data from Unseen Classes in Semi-Supervised Learning (ULUC-SSL) [14]. In particular, we note that existing open-set SSL methods rely on prediction discrepancies between inliers and outliers from a single model trained on labeled data. To effectively alleviate the SVA data labeling cost, we propose an approach SURF, which makes full use of a limited amount of labeled SVA data combined with a large amount of unlabeled SVA data to train the SVA model via semi-supervised learning."
"Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ossl_3,\cite{ossl_3},Multi-Task Curriculum Framework for Open-Set Semi-Supervised Learning,http://arxiv.org/abs/2007.11330v1,"Semi-supervised learning (SSL) has been proposed to leverage unlabeled data for training powerful models when only limited labeled data is available. While existing SSL methods assume that samples in the labeled and unlabeled data share the classes of their samples, we address a more complex novel scenario named open-set SSL, where out-of-distribution (OOD) samples are contained in unlabeled data. Instead of training an OOD detector and SSL separately, we propose a multi-task curriculum learning framework. First, to detect the OOD samples in unlabeled data, we estimate the probability of the sample belonging to OOD. We use a joint optimization framework, which updates the network parameters and the OOD score alternately. Simultaneously, to achieve high performance on the classification of in-distribution (ID) data, we select ID samples in unlabeled data having small OOD scores, and use these data with labeled data for training the deep neural networks to classify ID samples in a semi-supervised manner. We conduct several experiments, and our method achieves state-of-the-art results by successfully eliminating the effect of OOD samples.",True,True,"Qing Yu and Daiki Ikami and Go Irie and Kiyoharu Aizawa",2020.0,,,,,Multi-Task Curriculum Framework for Open-Set Semi-Supervised Learning,YU1ut/Multi-Task-Curriculum-Framework-for-Open-Set-SSL,https://github.com/YU1ut/Multi-Task-Curriculum-Framework-for-Open-Set-SSL,This is the official PyTorch implementation of Multi-Task Curriculum Framework for Open-Set Semi-Supervised Learning. architecture. Requirements. Python 3.7 "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ossl_9,\cite{ossl_9},"Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning",http://arxiv.org/abs/2108.05617v1,"Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data. While the mainstream technique seeks to completely filter out the OOD samples for semi-supervised learning (SSL), we propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning while avoiding its adverse impact on the SSL. We achieve this goal by first introducing a warm-up training that leverages all the unlabeled data, including both the in-distribution (ID) and OOD samples. Specifically, we perform a pretext task that enforces our feature extractor to obtain a high-level semantic understanding of the training images, leading to more discriminative features that can benefit the downstream tasks. Since the OOD samples are inevitably detrimental to SSL, we propose a novel cross-modal matching strategy to detect OOD samples. Instead of directly applying binary classification, we train the network to predict whether the data sample is matched to an assigned one-hot class label. The appeal of the proposed cross-modal matching over binary classification is the ability to generate a compatible feature space that aligns with the core classification task. 
Extensive experiments show that our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.",True,True,"Junkai Huang and Chaowei Fang and Weikai Chen and Zhenhua Chai and Xiaolin Wei and Pengxu Wei and Liang Lin and Guanbin Li",2021.0,,,,,"Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning",[PDF] Harvesting OOD Data With Cross-Modal Matching for Open-Set ...,https://guanbinli.com/papers/4-Huang_Trash_To_Treasure_Harvesting_OOD_Data_With_Cross-Modal_Matching_for_ICCV_2021_paper.pdf,Open-set semi-supervised learning (open-set SSL) inves- tigates a challenging but practical scenario where out-of- distribution (OOD) samples are contained "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ossl_10,\cite{ossl_10},Out-of-Distributed Semantic Pruning for Robust Semi-Supervised Learning,http://arxiv.org/abs/2305.18158v2,"Recent advances in robust semi-supervised learning (SSL) typically filter out-of-distribution (OOD) information at the sample level. We argue that an overlooked problem of robust SSL is its corrupted information on semantic level, practically limiting the development of the field. In this paper, we take an initial step to explore and propose a unified framework termed OOD Semantic Pruning (OSP), which aims at pruning OOD semantics out from in-distribution (ID) features. Specifically, (i) we propose an aliasing OOD matching module to pair each ID sample with an OOD sample with semantic overlap. (ii) We design a soft orthogonality regularization, which first transforms each ID feature by suppressing its semantic component that is collinear with paired OOD sample. It then forces the predictions before and after soft orthogonality decomposition to be consistent. Being practically simple, our method shows a strong performance in OOD detection and ID classification on challenging benchmarks. In particular, OSP surpasses the previous state-of-the-art by 13.7% on accuracy for ID classification and 5.9% on AUROC for OOD detection on TinyImageNet dataset. The source codes are publicly available at https://github.com/rain305f/OSP.",True,True,"Wang, Yu and Qiao, Pengchong and Liu, Chang and Song, Guoli and Zheng, Xiawu and Chen, Jie",2023.0,,,,,Out-of-Distributed Semantic Pruning for Robust Semi-Supervised Learning,[PDF] Out-of-Distributed Semantic Pruning for Robust Semi-Supervised ...,https://openaccess.thecvf.com/content/CVPR2023/papers/Wang_Out-of-Distributed_Semantic_Pruning_for_Robust_Semi-Supervised_Learning_CVPR_2023_paper.pdf,Recent advances in robust semi-supervised learning. (SSL) typically filter out-of-distribution (OOD) information at the sample level. "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ossl_8,\cite{ossl_8},"Unknown-Aware Graph Regularization for Robust Semi-supervised Learning from Uncurated Data",,,True,False,"Heejo Kong and Suneung Kim and Ho{-}Joong Kim and Seong{-}Whan Lee",2024.0,,,,,"Unknown-Aware Graph Regularization for Robust Semi-supervised Learning from Uncurated Data",Unknown-Aware Graph Regularization for Robust Semi- ...,https://www.researchgate.net/publication/379297624_Unknown-Aware_Graph_Regularization_for_Robust_Semi-supervised_Learning_from_Uncurated_Data,"In this paper, we propose a robust SSL method for learning from uncurated real-world data within the context of open-set semi-supervised learning (OSSL). 
Unlike" "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ossl_4,\cite{ossl_4},"OpenMatch: Open-set Consistency Regularization for Semi-supervised Learning with Outliers",http://arxiv.org/abs/2105.14148v2,"Semi-supervised learning (SSL) is an effective means to leverage unlabeled data to improve a model's performance. Typical SSL methods like FixMatch assume that labeled and unlabeled data share the same label space. However, in practice, unlabeled data can contain categories unseen in the labeled set, i.e., outliers, which can significantly harm the performance of SSL algorithms. To address this problem, we propose a novel Open-set Semi-Supervised Learning (OSSL) approach called OpenMatch. Learning representations of inliers while rejecting outliers is essential for the success of OSSL. To this end, OpenMatch unifies FixMatch with novelty detection based on one-vs-all (OVA) classifiers. The OVA-classifier outputs the confidence score of a sample being an inlier, providing a threshold to detect outliers. Another key contribution is an open-set soft-consistency regularization loss, which enhances the smoothness of the OVA-classifier with respect to input transformations and greatly improves outlier detection. OpenMatch achieves state-of-the-art performance on three datasets, and even outperforms a fully supervised model in detecting outliers unseen in unlabeled data on CIFAR10.",True,True,"Saito, Kuniaki and Kim, Donghyun and Saenko, Kate",2021.0,,,,,"OpenMatch: Open-set Consistency Regularization for Semi-supervised Learning with Outliers",VisionLearningGroup/OP_Match,https://github.com/VisionLearningGroup/OP_Match,OpenMatch: Open-set Consistency Regularization for Semi-supervised Learning with Outliers (NeurIPS 2021) ... This is an PyTorch implementation of OpenMatch. This "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ossl_7,\cite{ossl_7},"IOMatch: Simplifying Open-Set Semi-Supervised Learning with Joint Inliers and Outliers Utilization",http://arxiv.org/abs/2308.13168v1,"Semi-supervised learning (SSL) aims to leverage massive unlabeled data when labels are expensive to obtain. Unfortunately, in many real-world applications, the collected unlabeled data will inevitably contain unseen-class outliers not belonging to any of the labeled classes. To deal with the challenging open-set SSL task, the mainstream methods tend to first detect outliers and then filter them out. However, we observe a surprising fact that such approach could result in more severe performance degradation when labels are extremely scarce, as the unreliable outlier detector may wrongly exclude a considerable portion of valuable inliers. To tackle with this issue, we introduce a novel open-set SSL framework, IOMatch, which can jointly utilize inliers and outliers, even when it is difficult to distinguish exactly between them. Specifically, we propose to employ a multi-binary classifier in combination with the standard closed-set classifier for producing unified open-set classification targets, which regard all outliers as a single new class. By adopting these targets as open-set pseudo-labels, we optimize an open-set classifier with all unlabeled samples including both inliers and outliers. Extensive experiments have shown that IOMatch significantly outperforms the baseline methods across different benchmark datasets and different settings despite its remarkable simplicity. 
Our code and models are available at https://github.com/nukezil/IOMatch.",True,True,"Zekun Li and Lei Qi and Yinghuan Shi and Yang Gao",2023.0,,,,,"IOMatch: Simplifying Open-Set Semi-Supervised Learning with Joint Inliers and Outliers Utilization",[ICCV 2023 Oral] IOMatch: Simplifying Open-Set Semi-Supervised ...,https://github.com/nukezil/IOMatch,This is the official repository for our ICCV 2023 paper: IOMatch: Simplifying Open-Set Semi-Supervised Learning with Joint Inliers and Outliers Utilization. "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ossl_11,\cite{ossl_11},"SSB: Simple but Strong Baseline for Boosting Performance of Open-Set Semi-Supervised Learning",http://arxiv.org/abs/2311.10572v1,"Semi-supervised learning (SSL) methods effectively leverage unlabeled data to improve model generalization. However, SSL models often underperform in open-set scenarios, where unlabeled data contain outliers from novel categories that do not appear in the labeled set. In this paper, we study the challenging and realistic open-set SSL setting, where the goal is to both correctly classify inliers and to detect outliers. Intuitively, the inlier classifier should be trained on inlier data only. However, we find that inlier classification performance can be largely improved by incorporating high-confidence pseudo-labeled data, regardless of whether they are inliers or outliers. Also, we propose to utilize non-linear transformations to separate the features used for inlier classification and outlier detection in the multi-task learning framework, preventing adverse effects between them. Additionally, we introduce pseudo-negative mining, which further boosts outlier detection performance. The three ingredients lead to what we call Simple but Strong Baseline (SSB) for open-set SSL. In experiments, SSB greatly improves both inlier classification and outlier detection performance, outperforming existing methods by a large margin. Our code will be released at https://github.com/YUE-FAN/SSB.",True,True,"Fan, Yue and Kukleva, Anna and Dai, Dengxin and Schiele, Bernt",2023.0,,,,,"SSB: Simple but Strong Baseline for Boosting Performance of Open-Set Semi-Supervised Learning",SSB: Simple but Strong Baseline for Boosting Performance ...,https://ieeexplore.ieee.org/iel7/10376473/10376477/10377450.pdf,"by Y Fan · 2023 · Cited by 17 — Semi-supervised learning. (SSL) aims to improve model performance by exploiting both labeled and unlabeled data. 
As one of the most widely used techniques," "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ossl_1,\cite{ossl_1},Safe Deep Semi-Supervised Learning for Unseen-Class Unlabeled Data,,,True,False,"Lan{-}Zhe Guo and Zhenyu Zhang and Yuan Jiang and Yufeng Li and Zhi{-}Hua Zhou",2020.0,,,,,Safe Deep Semi-Supervised Learning for Unseen-Class Unlabeled Data,[PDF] Safe Deep Semi-Supervised Learning for Unseen-Class Unlabeled ...,http://proceedings.mlr.press/v119/guo20i/guo20i.pdf,"Deep semi-supervised learning (SSL) is proposed to utilize a large number of cheap unlabeled data to help deep neural networks improve performance, reducing" "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ossl_13,\cite{ossl_13},Binary Decomposition: A Problem Transformation Perspective for Open-Set Semi-Supervised Learning,,,True,False,"Hang, Jun-Yi and Zhang, Min-Ling",2024.0,,,,,Binary Decomposition: A Problem Transformation Perspective for Open-Set Semi-Supervised Learning,Binary decomposition | Proceedings of the 41st International ...,https://dl.acm.org/doi/10.5555/3692070.3692767,Binary decomposition: a problem transformation perspective for open-set semi-supervised learning. Computing methodologies · Machine learning. "Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers",2505.24443v1,ossl_17,\cite{ossl_17},"They are Not Completely Useless: Towards Recycling Transferable Unlabeled Data for Class-Mismatched Semi-Supervised Learning",http://arxiv.org/abs/2011.13529v4,"Semi-Supervised Learning (SSL) with mismatched classes deals with the problem that the classes-of-interests in the limited labeled data is only a subset of the classes in massive unlabeled data. As a result, the classes only possessed by the unlabeled data may mislead the classifier training and thus hindering the realistic landing of various SSL methods. To solve this problem, existing methods usually divide unlabeled data to in-distribution (ID) data and out-of-distribution (OOD) data, and directly discard or weaken the OOD data to avoid their adverse impact. In other words, they treat OOD data as completely useless and thus the potential valuable information for classification contained by them is totally ignored. To remedy this defect, this paper proposes a ""Transferable OOD data Recycling"" (TOOR) method which properly utilizes ID data as well as the ""recyclable"" OOD data to enrich the information for conducting class-mismatched SSL. Specifically, TOOR firstly attributes all unlabeled data to ID data or OOD data, among which the ID data are directly used for training. Then we treat the OOD data that have a close relationship with ID data and labeled data as recyclable, and employ adversarial domain adaptation to project them to the space of ID data and labeled data. In other words, the recyclability of an OOD datum is evaluated by its transferability, and the recyclable OOD data are transferred so that they are compatible with the distribution of known classes-of-interests. Consequently, our TOOR method extracts more information from unlabeled data than existing approaches, so it can achieve the improved performance which is demonstrated by the experiments on typical benchmark datasets.",True,True,"Huang, Zhuo and Yang, Jian and Gong, Chen",2022.0,,,,{IEEE} Trans. on Multimedia,"They are Not Completely Useless: Towards Recycling Transferable Unlabeled Data for Class-Mismatched Semi-Supervised Learning",Towards Recycling Transferable Unlabeled Data for Class ... - arXiv,https://arxiv.org/abs/2011.13529,They are Not Completely Useless: Towards Recycling Transferable Unlabeled Data for Class-Mismatched Semi-Supervised Learning. Authors:Zhuo Huang "KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,liu2024deep,\cite{liu2024deep},Deep Industrial Image Anomaly Detection: A Survey,http://arxiv.org/abs/2301.11514v5,"The recent rapid development of deep learning has laid a milestone in industrial Image Anomaly Detection (IAD). In this paper, we provide a comprehensive review of deep learning-based image anomaly detection techniques, from the perspectives of neural network architectures, levels of supervision, loss functions, metrics and datasets. In addition, we extract the new setting from industrial manufacturing and review the current IAD approaches under our proposed our new setting. Moreover, we highlight several opening challenges for image anomaly detection. The merits and downsides of representative network architectures under varying supervision are discussed. Finally, we summarize the research findings and point out future research directions. More resources are available at https://github.com/M-3LAB/awesome-industrial-anomaly-detection.",True,True,"Liu, Jiaqi and Xie, Guoyang and Wang, Jinbao and Li, Shangnian and Wang, Chengjie and Zheng, Feng and Jin, Yaochu",2024.0,,,10.1109/cvpr52688.2022.01392,Machine Intelligence Research,Deep Industrial Image Anomaly Detection: A Survey,Deep Industrial Image Anomaly Detection: A Survey,http://arxiv.org/pdf/2301.11514v5,"The recent rapid development of deep learning has laid a milestone in industrial Image Anomaly Detection (IAD). In this paper, we provide a comprehensive review of deep learning-based image anomaly detection techniques, from the perspectives of neural network architectures, levels of supervision, loss functions, metrics and datasets. In addition, we extract the new setting from industrial manufacturing and review the current IAD approaches under our proposed our new setting. Moreover, we highlight several opening challenges for image anomaly detection. The merits and downsides of representative network architectures under varying supervision are discussed. Finally, we summarize the research findings and point out future research directions. More resources are available at https://github.com/M-3LAB/awesome-industrial-anomaly-detection." "KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,bergmann2019mvtec,\cite{bergmann2019mvtec},{MVTec AD — A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection},,,True,False,"Bergmann, Paul and Fauser, Michael and Sattlegger, David and Steger, Carsten",2019.0,,,10.1007/978-3-031-20056-4_23,,{MVTec AD — A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection},The MVTec Anomaly Detection Dataset - ACM Digital Library,https://dl.acm.org/doi/abs/10.1007/s11263-020-01400-4,(2019a). MVTec AD: A comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE conference on computer vision and pattern "KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,bergmann2018improving,\cite{bergmann2018improving},"Improving Unsupervised Defect Segmentation by Applying Structural Similarity to Autoencoders",http://arxiv.org/abs/1807.02011v3,"Convolutional autoencoders have emerged as popular methods for unsupervised defect segmentation on image data. Most commonly, this task is performed by thresholding a pixel-wise reconstruction error based on an $\ell^p$ distance. This procedure, however, leads to large residuals whenever the reconstruction encompasses slight localization inaccuracies around edges. It also fails to reveal defective regions that have been visually altered when intensity values stay roughly consistent. We show that these problems prevent these approaches from being applied to complex real-world scenarios and that it cannot be easily avoided by employing more elaborate architectures such as variational or feature matching autoencoders. We propose to use a perceptual loss function based on structural similarity which examines inter-dependencies between local image regions, taking into account luminance, contrast and structural information, instead of simply comparing single pixel values. It achieves significant performance gains on a challenging real-world dataset of nanofibrous materials and a novel dataset of two woven fabrics over the state of the art approaches for unsupervised defect segmentation that use pixel-wise reconstruction error metrics.",True,True,"Bergmann, Paul and Löwe, Sindy and Fauser, Michael and Sattlegger, David and Steger, Carsten",2019.0,,,,,"Improving Unsupervised Defect Segmentation by Applying Structural Similarity to Autoencoders",(PDF) Improving Unsupervised Defect Segmentation by Applying ...,https://www.researchgate.net/publication/331779705_Improving_Unsupervised_Defect_Segmentation_by_Applying_Structural_Similarity_to_Autoencoders,Improving Unsupervised Defect Segmentation by Applying Structural Similarity to Autoencoders ; Paul Bergmann at Technical University of Munich. Paul Bergmann. "KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,liu2020towards,\cite{liu2020towards},Towards Visually Explaining Variational Autoencoders,http://arxiv.org/abs/1911.07389v7,"Recent advances in Convolutional Neural Network (CNN) model interpretability have led to impressive progress in visualizing and understanding model predictions. In particular, gradient-based visual attention methods have driven much recent effort in using visual attention maps as a means for visual explanations. A key problem, however, is these methods are designed for classification and categorization tasks, and their extension to explaining generative models, e.g. variational autoencoders (VAE) is not trivial. In this work, we take a step towards bridging this crucial gap, proposing the first technique to visually explain VAEs by means of gradient-based attention. We present methods to generate visual attention from the learned latent space, and also demonstrate such attention explanations serve more than just explaining VAE predictions. We show how these attention maps can be used to localize anomalies in images, demonstrating state-of-the-art performance on the MVTec-AD dataset.
We also show how they can be infused into model training, helping bootstrap the VAE into learning improved latent space disentanglement, demonstrated on the Dsprites dataset.",True,True,"Liu, Wenqian and Li, Runze and Zheng, Meng and Karanam, Srikrishna and Wu, Ziyan and Bhanu, Bir and Radke, Richard J. and Camps, Octavia",2020.0,,,10.1007/978-3-030-20893-6_39,,Towards Visually Explaining Variational Autoencoders,Towards Visually Explaining Variational Autoencoders,http://arxiv.org/pdf/1911.07389v7,"Recent advances in Convolutional Neural Network (CNN) model interpretability have led to impressive progress in visualizing and understanding model predictions. In particular, gradient-based visual attention methods have driven much recent effort in using visual attention maps as a means for visual explanations. A key problem, however, is these methods are designed for classification and categorization tasks, and their extension to explaining generative models, e.g. variational autoencoders (VAE) is not trivial. In this work, we take a step towards bridging this crucial gap, proposing the first technique to visually explain VAEs by means of gradient-based attention. We present methods to generate visual attention from the learned latent space, and also demonstrate such attention explanations serve more than just explaining VAE predictions. We show how these attention maps can be used to localize anomalies in images, demonstrating state-of-the-art performance on the MVTec-AD dataset. We also show how they can be infused into model training, helping bootstrap the VAE into learning improved latent space disentanglement, demonstrated on the Dsprites dataset." "KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,akcay2019ganomaly,\cite{akcay2019ganomaly},GANomaly: Semi-Supervised Anomaly Detection via Adversarial Training,http://arxiv.org/abs/1805.06725v3,"Anomaly detection is a classical problem in computer vision, namely the determination of the normal from the abnormal when datasets are highly biased towards one class (normal) due to the insufficient sample size of the other class (abnormal). While this can be addressed as a supervised learning problem, a significantly more challenging problem is that of detecting the unknown/unseen anomaly case that takes us instead into the space of a one-class, semi-supervised learning paradigm. We introduce such a novel anomaly detection model, by using a conditional generative adversarial network that jointly learns the generation of high-dimensional image space and the inference of latent space. Employing encoder-decoder-encoder sub-networks in the generator network enables the model to map the input image to a lower dimension vector, which is then used to reconstruct the generated output image. The use of the additional encoder network maps this generated image to its latent representation. Minimizing the distance between these images and the latent vectors during training aids in learning the data distribution for the normal samples. As a result, a larger distance metric from this learned data distribution at inference time is indicative of an outlier from that distribution - an anomaly. 
Experimentation over several benchmark datasets, from varying domains, shows the model efficacy and superiority over previous state-of-the-art approaches.",True,True,"Akcay, Samet and Atapour-Abarghouei, Amir and Breckon, Toby P.",2019.0,,,,,GANomaly: Semi-Supervised Anomaly Detection via Adversarial Training,GANomaly Paper Review: Semi-Supervised Anomaly Detection via ...,https://towardsdatascience.com/ganomaly-paper-review-semi-supervised-anomaly-detection-via-adversarial-training-a6f7a64a265f/,GANomaly is an anomaly detection model that employs adversarial training to capture the data distribution. "KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,damm2024anomalydino,\cite{damm2024anomalydino},AnomalyDINO: Boosting Patch-based Few-shot Anomaly Detection with DINOv2,http://arxiv.org/abs/2405.14529v3,"Recent advances in multimodal foundation models have set new standards in few-shot anomaly detection. This paper explores whether high-quality visual features alone are sufficient to rival existing state-of-the-art vision-language models. We affirm this by adapting DINOv2 for one-shot and few-shot anomaly detection, with a focus on industrial applications. We show that this approach does not only rival existing techniques but can even outmatch them in many settings. Our proposed vision-only approach, AnomalyDINO, follows the well-established patch-level deep nearest neighbor paradigm, and enables both image-level anomaly prediction and pixel-level anomaly segmentation. The approach is methodologically simple and training-free and, thus, does not require any additional data for fine-tuning or meta-learning. The approach is methodologically simple and training-free and, thus, does not require any additional data for fine-tuning or meta-learning. Despite its simplicity, AnomalyDINO achieves state-of-the-art results in one- and few-shot anomaly detection (e.g., pushing the one-shot performance on MVTec-AD from an AUROC of 93.1% to 96.6%). The reduced overhead, coupled with its outstanding few-shot performance, makes AnomalyDINO a strong candidate for fast deployment, e.g., in industrial contexts.",True,True,"Damm, Simon and Laszkiewicz, Mike and Lederer, Johannes and Fischer, Asja",2024.0,,,10.1561/0600000110,,AnomalyDINO: Boosting Patch-based Few-shot Anomaly Detection with DINOv2,[PDF] Boosting Patch-Based Few-Shot Anomaly Detection with DINOv2,https://openaccess.thecvf.com/content/WACV2025/papers/Damm_AnomalyDINO_Boosting_Patch-Based_Few-Shot_Anomaly_Detection_with_DINOv2_WACV_2025_paper.pdf,"Our approach, termed AnomalyDINO, follows the well- established AD framework of patch-level deep nearest neighbor [34, 46], and leverages DINOv2 [30] as a back-." "KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,roth2022towards,\cite{roth2022towards},Towards Total Recall in Industrial Anomaly Detection,http://arxiv.org/abs/2106.08265v2,"Being able to spot defective parts is a critical component in large-scale industrial manufacturing. A particular challenge that we address in this work is the cold-start problem: fit a model using nominal (non-defective) example images only. While handcrafted solutions per class are possible, the goal is to build systems that work well simultaneously on many different tasks automatically. The best performing approaches combine embeddings from ImageNet models with an outlier detection model. 
In this paper, we extend on this line of work and propose \textbf{PatchCore}, which uses a maximally representative memory bank of nominal patch-features. PatchCore offers competitive inference times while achieving state-of-the-art performance for both detection and localization. On the challenging, widely used MVTec AD benchmark PatchCore achieves an image-level anomaly detection AUROC score of up to $99.6\%$, more than halving the error compared to the next best competitor. We further report competitive results on two additional datasets and also find competitive results in the few samples regime.\freefootnote{$^*$ Work done during a research internship at Amazon AWS.} Code: github.com/amazon-research/patchcore-inspection.",True,True,"Roth, Karsten and Pemula, Latha and Zepeda, Joaquin and Scholkopf, Bernhard and Brox, Thomas and Gehler, Peter",2022.0,,,10.1109/cvpr52688.2022.00951,,Towards Total Recall in Industrial Anomaly Detection,Towards Total Recall in Industrial Anomaly Detection,http://arxiv.org/pdf/2106.08265v2,"Being able to spot defective parts is a critical component in large-scale industrial manufacturing. A particular challenge that we address in this work is the cold-start problem: fit a model using nominal (non-defective) example images only. While handcrafted solutions per class are possible, the goal is to build systems that work well simultaneously on many different tasks automatically. The best performing approaches combine embeddings from ImageNet models with an outlier detection model. In this paper, we extend on this line of work and propose \textbf{PatchCore}, which uses a maximally representative memory bank of nominal patch-features. PatchCore offers competitive inference times while achieving state-of-the-art performance for both detection and localization. On the challenging, widely used MVTec AD benchmark PatchCore achieves an image-level anomaly detection AUROC score of up to $99.6\%$, more than halving the error compared to the next best competitor. We further report competitive results on two additional datasets and also find competitive results in the few samples regime.\freefootnote{$^*$ Work done during a research internship at Amazon AWS.} Code: github.com/amazon-research/patchcore-inspection." "KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,jiang2022softpatch,\cite{jiang2022softpatch},SoftPatch: Unsupervised Anomaly Detection with Noisy Data,http://arxiv.org/abs/2403.14233v1,"Although mainstream unsupervised anomaly detection (AD) algorithms perform well in academic datasets, their performance is limited in practical application due to the ideal experimental setting of clean training data. Training with noisy data is an inevitable problem in real-world anomaly detection but is seldom discussed. This paper considers label-level noise in image sensory anomaly detection for the first time. To solve this problem, we proposed a memory-based unsupervised AD method, SoftPatch, which efficiently denoises the data at the patch level. Noise discriminators are utilized to generate outlier scores for patch-level noise elimination before coreset construction. The scores are then stored in the memory bank to soften the anomaly detection boundary. Compared with existing methods, SoftPatch maintains a strong modeling ability of normal data and alleviates the overconfidence problem in coreset. 
Comprehensive experiments in various noise scenes demonstrate that SoftPatch outperforms the state-of-the-art AD methods on the MVTecAD and BTAD benchmarks and is comparable to those methods under the setting without noise.",True,True,"Jiang, Xi and Liu, Jianlin and Wang, Jinbao and Nie, Qiang and Wu, Kai and Liu, Yong and Wang, Chengjie and Zheng, Feng",2022.0,,,,,SoftPatch: Unsupervised Anomaly Detection with Noisy Data,SoftPatch: Unsupervised Anomaly Detection with Noisy Data,http://arxiv.org/pdf/2403.14233v1,"Although mainstream unsupervised anomaly detection (AD) algorithms perform well in academic datasets, their performance is limited in practical application due to the ideal experimental setting of clean training data. Training with noisy data is an inevitable problem in real-world anomaly detection but is seldom discussed. This paper considers label-level noise in image sensory anomaly detection for the first time. To solve this problem, we proposed a memory-based unsupervised AD method, SoftPatch, which efficiently denoises the data at the patch level. Noise discriminators are utilized to generate outlier scores for patch-level noise elimination before coreset construction. The scores are then stored in the memory bank to soften the anomaly detection boundary. Compared with existing methods, SoftPatch maintains a strong modeling ability of normal data and alleviates the overconfidence problem in coreset. Comprehensive experiments in various noise scenes demonstrate that SoftPatch outperforms the state-of-the-art AD methods on the MVTecAD and BTAD benchmarks and is comparable to those methods under the setting without noise." "KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,li2024sam,\cite{li2024sam},A SAM-guided Two-stream Lightweight Model for Anomaly Detection,http://arxiv.org/abs/2402.19145v2,"In industrial anomaly detection, model efficiency and mobile-friendliness become the primary concerns in real-world applications. Simultaneously, the impressive generalization capabilities of Segment Anything (SAM) have garnered broad academic attention, making it an ideal choice for localizing unseen anomalies and diverse real-world patterns. In this paper, considering these two critical factors, we propose a SAM-guided Two-stream Lightweight Model for unsupervised anomaly detection (STLM) that not only aligns with the two practical application requirements but also harnesses the robust generalization capabilities of SAM. We employ two lightweight image encoders, i.e., our two-stream lightweight module, guided by SAM's knowledge. To be specific, one stream is trained to generate discriminative and general feature representations in both normal and anomalous regions, while the other stream reconstructs the same images without anomalies, which effectively enhances the differentiation of two-stream representations when facing anomalous regions. Furthermore, we employ a shared mask decoder and a feature aggregation module to generate anomaly maps. Our experiments conducted on MVTec AD benchmark show that STLM, with about 16M parameters and achieving an inference time in 20ms, competes effectively with state-of-the-art methods in terms of performance, 98.26% on pixel-level AUC and 94.92% on PRO. 
We further experiment on more difficult datasets, e.g., VisA and DAGM, to demonstrate the effectiveness and generalizability of STLM.",True,True,"Li, Chenghao and Qi, Lei and Geng, Xin",2025.0,,,10.1109/cvpr.2019.00982,"ACM Transactions on Multimedia Computing, Communications, and Applications",A SAM-guided Two-stream Lightweight Model for Anomaly Detection,A SAM-guided Two-stream Lightweight Model for Anomaly Detection,https://arxiv.org/html/2402.19145v1,"In this paper, we propose a novel framework called SAM-guided Two-stream Lightweight Model for unsupervised anomaly detection tasks." "KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,li2024multimodal,\cite{li2024multimodal},"Multimodal Foundation Models: From Specialists to General-Purpose Assistants",http://arxiv.org/abs/2309.10020v1,"This paper presents a comprehensive survey of the taxonomy and evolution of multimodal foundation models that demonstrate vision and vision-language capabilities, focusing on the transition from specialist models to general-purpose assistants. The research landscape encompasses five core topics, categorized into two classes. (i) We start with a survey of well-established research areas: multimodal foundation models pre-trained for specific purposes, including two topics -- methods of learning vision backbones for visual understanding and text-to-image generation. (ii) Then, we present recent advances in exploratory, open research areas: multimodal foundation models that aim to play the role of general-purpose assistants, including three topics -- unified vision models inspired by large language models (LLMs), end-to-end training of multimodal LLMs, and chaining multimodal tools with LLMs. The target audiences of the paper are researchers, graduate students, and professionals in computer vision and vision-language multimodal communities who are eager to learn the basics and recent advances in multimodal foundation models.",True,True,"Li, Chunyuan and Gan, Zhe and Yang, Zhengyuan and Yang, Jianwei and Li, Linjie and Wang, Lijuan and Gao, Jianfeng",2024.0,,,,Foundations and Trends in Computer Graphics and Vision,"Multimodal Foundation Models: From Specialists to General-Purpose Assistants",Multimodal Foundation Models: From Specialists to ...,https://www.nowpublishers.com/article/Details/CGV-110,by C Li · 2024 · Cited by 316 — This monograph presents a comprehensive survey of the taxonomy and evolution of multimodal foundation models that demonstrate vision and vision-language "KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,radford2021learning,\cite{radford2021learning},Learning Transferable Visual Models From Natural Language Supervision,http://arxiv.org/abs/2103.00020v1,"State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. 
After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.",True,True,"Radford, Alec and Kim, Jong Wook and Hallacy, Chris and Ramesh, Aditya and Goh, Gabriel and Agarwal, Sandhini and Sastry, Girish and Askell, Amanda and Mishkin, Pamela and Clark, Jack and Krueger, Gretchen and Sutskever, Ilya",2021.0,,,,,Learning Transferable Visual Models From Natural Language Supervision,Learning Transferable Visual Models From Natural Language Supervision,http://arxiv.org/pdf/2103.00020v1,"State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP." "KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,kirillov2023segment,\cite{kirillov2023segment},Segment Anything,http://arxiv.org/abs/2304.02643v1,"We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. 
We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at https://segment-anything.com to foster research into foundation models for computer vision.",True,True,"Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Dollar, Piotr and Girshick, Ross",2023.0,,,,,Segment Anything,Segment Anything,http://arxiv.org/pdf/2304.02643v1,"We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at https://segment-anything.com to foster research into foundation models for computer vision." "KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,caron2021emerging,\cite{caron2021emerging},Emerging Properties in Self-Supervised Vision Transformers,http://arxiv.org/abs/2104.14294v2,"In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the fact that adapting self-supervised methods to this architecture works particularly well, we make the following observations: first, self-supervised ViT features contain explicit information about the semantic segmentation of an image, which does not emerge as clearly with supervised ViTs, nor with convnets. Second, these features are also excellent k-NN classifiers, reaching 78.3% top-1 on ImageNet with a small ViT. Our study also underlines the importance of momentum encoder, multi-crop training, and the use of small patches with ViTs. We implement our findings into a simple self-supervised method, called DINO, which we interpret as a form of self-distillation with no labels. We show the synergy between DINO and ViTs by achieving 80.1% top-1 on ImageNet in linear evaluation with ViT-Base.",True,True,"Caron, Mathilde and Touvron, Hugo and Misra, Ishan and J\'egou, Herv\'e and Mairal, Julien and Bojanowski, Piotr and Joulin, Armand",2021.0,,,,,Emerging Properties in Self-Supervised Vision Transformers,[PDF] Emerging Properties in Self-Supervised Vision Transformers,https://openaccess.thecvf.com/content/ICCV2021/papers/Caron_Emerging_Properties_in_Self-Supervised_Vision_Transformers_ICCV_2021_paper.pdf,"Self-supervised ViT features contain semantic segmentation, scene layout, object boundaries, and perform well with k-NN classifiers, unlike supervised ViTs or" "KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,oquab2023dinov2,\cite{oquab2023dinov2},DINOv2: Learning Robust Visual Features without Supervision,http://arxiv.org/abs/2304.07193v2,"The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision.
These models could greatly simplify the use of images in any system by producing all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources. We revisit existing approaches and combine different techniques to scale our pretraining in terms of data and model size. Most of the technical contributions aim at accelerating and stabilizing the training at scale. In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature. In terms of models, we train a ViT model (Dosovitskiy et al., 2020) with 1B parameters and distill it into a series of smaller models that surpass the best available all-purpose features, OpenCLIP (Ilharco et al., 2021) on most of the benchmarks at image and pixel levels.",True,True,Maxime Oquab and Timoth{\'e}e Darcet and Th{\'e}o Moutakanni and Huy V. Vo and Marc Szafraniec and Vasil Khalidov and Pierre Fernandez and Daniel HAZIZA and Francisco Massa and Alaaeldin El-Nouby and Mido Assran and Nicolas Ballas and Wojciech Galuba and Russell Howes and Po-Yao Huang and Shang-Wen Li and Ishan Misra and Michael Rabbat and Vasu Sharma and Gabriel Synnaeve and Hu Xu and Herve Jegou and Julien Mairal and Patrick Labatut and Armand Joulin and Piotr Bojanowski,2024.0,,,,Transactions on Machine Learning Research,DINOv2: Learning Robust Visual Features without Supervision,DINOv2: Learning Robust Visual Features without Supervision,http://arxiv.org/pdf/2304.07193v2,"The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision. These models could greatly simplify the use of images in any system by producing all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources. We revisit existing approaches and combine different techniques to scale our pretraining in terms of data and model size. Most of the technical contributions aim at accelerating and stabilizing the training at scale. In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature. In terms of models, we train a ViT model (Dosovitskiy et al., 2020) with 1B parameters and distill it into a series of smaller models that surpass the best available all-purpose features, OpenCLIP (Ilharco et al., 2021) on most of the benchmarks at image and pixel levels." "KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,zhang2023faster,\cite{zhang2023faster},Faster Segment Anything: Towards Lightweight SAM for Mobile Applications,http://arxiv.org/abs/2306.14289v2,"Segment Anything Model (SAM) has attracted significant attention due to its impressive zero-shot transfer performance and high versatility for numerous vision applications (like image editing with fine-grained control). Many of such applications need to be run on resource-constraint edge devices, like mobile phones. 
In this work, we aim to make SAM mobile-friendly by replacing the heavyweight image encoder with a lightweight one. A naive way to train such a new SAM as in the original SAM paper leads to unsatisfactory performance, especially when limited training sources are available. We find that this is mainly caused by the coupled optimization of the image encoder and mask decoder, motivated by which we propose decoupled distillation. Concretely, we distill the knowledge from the heavy image encoder (ViT-H in the original SAM) to a lightweight image encoder, which can be automatically compatible with the mask decoder in the original SAM. The training can be completed on a single GPU within less than one day, and the resulting lightweight SAM is termed MobileSAM which is more than 60 times smaller yet performs on par with the original SAM. For inference speed, With a single GPU, MobileSAM runs around 10ms per image: 8ms on the image encoder and 4ms on the mask decoder. With superior performance, our MobileSAM is around 5 times faster than the concurrent FastSAM and 7 times smaller, making it more suitable for mobile applications. Moreover, we show that MobileSAM can run relatively smoothly on CPU. The code for our project is provided at \href{https://github.com/ChaoningZhang/MobileSAM}{\textcolor{red}{MobileSAM}}), with a demo showing that MobileSAM can run relatively smoothly on CPU.",True,True,"Zhang, Chaoning and Han, Dongshen and Qiao, Yu and Kim, Jung Uk and Bae, Sung-Ho and Lee, Seungkyu and Hong, Choong Seon",2023.0,,,,arXiv preprint arXiv:2306.14289,Faster Segment Anything: Towards Lightweight SAM for Mobile Applications,Faster Segment Anything: Towards Lightweight SAM for Mobile Applications,http://arxiv.org/pdf/2306.14289v2,"Segment Anything Model (SAM) has attracted significant attention due to its impressive zero-shot transfer performance and high versatility for numerous vision applications (like image editing with fine-grained control). Many of such applications need to be run on resource-constraint edge devices, like mobile phones. In this work, we aim to make SAM mobile-friendly by replacing the heavyweight image encoder with a lightweight one. A naive way to train such a new SAM as in the original SAM paper leads to unsatisfactory performance, especially when limited training sources are available. We find that this is mainly caused by the coupled optimization of the image encoder and mask decoder, motivated by which we propose decoupled distillation. Concretely, we distill the knowledge from the heavy image encoder (ViT-H in the original SAM) to a lightweight image encoder, which can be automatically compatible with the mask decoder in the original SAM. The training can be completed on a single GPU within less than one day, and the resulting lightweight SAM is termed MobileSAM which is more than 60 times smaller yet performs on par with the original SAM. For inference speed, With a single GPU, MobileSAM runs around 10ms per image: 8ms on the image encoder and 4ms on the mask decoder. With superior performance, our MobileSAM is around 5 times faster than the concurrent FastSAM and 7 times smaller, making it more suitable for mobile applications. Moreover, we show that MobileSAM can run relatively smoothly on CPU. The code for our project is provided at \href{https://github.com/ChaoningZhang/MobileSAM}{\textcolor{red}{MobileSAM}}), with a demo showing that MobileSAM can run relatively smoothly on CPU."
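The decoupled distillation described in the MobileSAM entry above comes down to a single feature-matching objective: the lightweight student encoder is fit directly to the frozen teacher encoder's image embeddings, so the original mask decoder can be reused unchanged. The following minimal PyTorch sketch illustrates that training loop under stated assumptions; the toy convolutional encoders and the plain MSE objective are stand-ins, not the authors' implementation.

```python
# Illustrative sketch of MobileSAM-style decoupled distillation:
# the student encoder learns to reproduce the frozen teacher encoder's
# image embeddings; the mask decoder is never touched during training.
import torch
import torch.nn as nn

# Toy stand-ins for SAM's ViT-H encoder (teacher) and a lightweight
# encoder (student); both map images to embeddings of the same shape.
teacher = nn.Sequential(
    nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.GELU(),
    nn.Conv2d(64, 256, 3, stride=2, padding=1),
).eval()
for p in teacher.parameters():
    p.requires_grad_(False)  # teacher is frozen throughout

student = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.GELU(),
    nn.Conv2d(16, 256, 3, stride=2, padding=1),
)
opt = torch.optim.AdamW(student.parameters(), lr=1e-3)

def distill_step(images: torch.Tensor) -> float:
    with torch.no_grad():
        target = teacher(images)  # teacher embeddings, no gradient
    loss = nn.functional.mse_loss(student(images), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

print(distill_step(torch.randn(4, 3, 64, 64)))  # one toy training step
```

Because the student is trained only against encoder outputs, it can be dropped in front of the unmodified mask decoder afterwards, which is what makes the distillation "decoupled".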
"KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,capogrosso2024machine,\cite{capogrosso2024machine},A Machine Learning-oriented Survey on Tiny Machine Learning,http://arxiv.org/abs/2309.11932v2,"The emergence of Tiny Machine Learning (TinyML) has positively revolutionized the field of Artificial Intelligence by promoting the joint design of resource-constrained IoT hardware devices and their learning-based software architectures. TinyML carries an essential role within the fourth and fifth industrial revolutions in helping societies, economies, and individuals employ effective AI-infused computing technologies (e.g., smart cities, automotive, and medical robotics). Given its multidisciplinary nature, the field of TinyML has been approached from many different angles: this comprehensive survey wishes to provide an up-to-date overview focused on all the learning algorithms within TinyML-based solutions. The survey is based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodological flow, allowing for a systematic and complete literature survey. In particular, firstly we will examine the three different workflows for implementing a TinyML-based system, i.e., ML-oriented, HW-oriented, and co-design. Secondly, we propose a taxonomy that covers the learning panorama under the TinyML lens, examining in detail the different families of model optimization and design, as well as the state-of-the-art learning techniques. Thirdly, this survey will present the distinct features of hardware devices and software tools that represent the current state-of-the-art for TinyML intelligent edge applications. Finally, we discuss the challenges and future directions.",True,True,"Capogrosso, Luigi and Cunico, Federico and Cheng, Dong Seon and Fummi, Franco and Cristani, Marco",2024.0,,,10.1109/access.2022.3182659,IEEE Access,A Machine Learning-oriented Survey on Tiny Machine Learning,(PDF) A Machine Learning-Oriented Survey on Tiny Machine Learning,https://www.researchgate.net/publication/378163073_A_Machine_Learning-oriented_Survey_on_Tiny_Machine_Learning,This comprehensive survey wishes to provide an up-to-date overview focused on all the learning algorithms within TinyML-based solutions. "KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,vadera2022methods,\cite{vadera2022methods},Methods for Pruning Deep Neural Networks,http://arxiv.org/abs/2011.00241v2,"This paper presents a survey of methods for pruning deep neural networks. It begins by categorising over 150 studies based on the underlying approach used and then focuses on three categories: methods that use magnitude based pruning, methods that utilise clustering to identify redundancy, and methods that use sensitivity analysis to assess the effect of pruning. Some of the key influencing studies within these categories are presented to highlight the underlying approaches and results achieved. Most studies present results which are distributed in the literature as new architectures, algorithms and data sets have developed with time, making comparison across different studied difficult. The paper therefore provides a resource for the community that can be used to quickly compare the results from many different methods on a variety of data sets, and a range of architectures, including AlexNet, ResNet, DenseNet and VGG. 
The resource is illustrated by comparing the results published for pruning AlexNet and ResNet50 on ImageNet and ResNet56 and VGG16 on the CIFAR10 data to reveal which pruning methods work well in terms of retaining accuracy whilst achieving good compression rates. The paper concludes by identifying some promising directions for future research.",True,True,"Vadera, Sunil and Ameen, Salem",2022.0,,,10.1109/access.2022.3182659,IEEE Access,Methods for Pruning Deep Neural Networks,Methods for Pruning Deep Neural Networks,http://arxiv.org/pdf/2011.00241v2,"This paper presents a survey of methods for pruning deep neural networks. It begins by categorising over 150 studies based on the underlying approach used and then focuses on three categories: methods that use magnitude based pruning, methods that utilise clustering to identify redundancy, and methods that use sensitivity analysis to assess the effect of pruning. Some of the key influencing studies within these categories are presented to highlight the underlying approaches and results achieved. Most studies present results which are distributed in the literature as new architectures, algorithms and data sets have developed with time, making comparison across different studied difficult. The paper therefore provides a resource for the community that can be used to quickly compare the results from many different methods on a variety of data sets, and a range of architectures, including AlexNet, ResNet, DenseNet and VGG. The resource is illustrated by comparing the results published for pruning AlexNet and ResNet50 on ImageNet and ResNet56 and VGG16 on the CIFAR10 data to reveal which pruning methods work well in terms of retaining accuracy whilst achieving good compression rates. The paper concludes by identifying some promising directions for future research." "KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,gholami2022survey,\cite{gholami2022survey},A Survey of Quantization Methods for Efficient Neural Network Inference,http://arxiv.org/abs/2103.13630v3,"As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks.
In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.",True,True,"Gholami, Amir and Kim, Sehoon and Dong, Zhen and Yao, Zhewei and Mahoney, Michael W. and Keutzer, Kurt",2022.0,,,10.1201/9781003162810-13,,A Survey of Quantization Methods for Efficient Neural Network Inference,A Survey of Quantization Methods for Efficient Neural Network Inference,http://arxiv.org/pdf/2103.13630v3,"As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area." "KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,gou2021knowledge,\cite{gou2021knowledge},Knowledge Distillation: A Survey,http://arxiv.org/abs/2006.05525v7,"In recent years, deep neural networks have been successful in both industry and academia, especially for computer vision tasks. The great success of deep learning is mainly due to its scalability to encode large-scale data and to maneuver billions of model parameters. However, it is a challenge to deploy these cumbersome deep models on devices with limited resources, e.g., mobile phones and embedded devices, not only because of the high computational complexity but also the large storage requirements. To this end, a variety of model compression and acceleration techniques have been developed. As a representative type of model compression and acceleration, knowledge distillation effectively learns a small student model from a large teacher model. It has received rapid increasing attention from the community.
This paper provides a comprehensive survey of knowledge distillation from the perspectives of knowledge categories, training schemes, teacher-student architecture, distillation algorithms, performance comparison and applications. Furthermore, challenges in knowledge distillation are briefly reviewed and comments on future research are discussed and forwarded.",True,True,"Gou, Jianping and Yu, Baosheng and Maybank, Stephen J. and Tao, Dacheng",2021.0,,,10.1007/s11263-021-01453-z,International Journal of Computer Vision,Knowledge Distillation: A Survey,Knowledge Distillation: A Survey,http://arxiv.org/pdf/2006.05525v7,"In recent years, deep neural networks have been successful in both industry and academia, especially for computer vision tasks. The great success of deep learning is mainly due to its scalability to encode large-scale data and to maneuver billions of model parameters. However, it is a challenge to deploy these cumbersome deep models on devices with limited resources, e.g., mobile phones and embedded devices, not only because of the high computational complexity but also the large storage requirements. To this end, a variety of model compression and acceleration techniques have been developed. As a representative type of model compression and acceleration, knowledge distillation effectively learns a small student model from a large teacher model. It has received rapid increasing attention from the community. This paper provides a comprehensive survey of knowledge distillation from the perspectives of knowledge categories, training schemes, teacher-student architecture, distillation algorithms, performance comparison and applications. Furthermore, challenges in knowledge distillation are briefly reviewed and comments on future research are discussed and forwarded." "KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,ren2021comprehensive,\cite{ren2021comprehensive},"A Comprehensive Survey of Neural Architecture Search: Challenges and Solutions",http://arxiv.org/abs/2006.02903v3,"Deep learning has made breakthroughs and substantial in many fields due to its powerful automatic representation capabilities. It has been proven that neural architecture design is crucial to the feature representation of data and the final performance. However, the design of the neural architecture heavily relies on the researchers' prior knowledge and experience. And due to the limitations of human' inherent knowledge, it is difficult for people to jump out of their original thinking paradigm and design an optimal model. Therefore, an intuitive idea would be to reduce human intervention as much as possible and let the algorithm automatically design the neural architecture. Neural Architecture Search (NAS) is just such a revolutionary algorithm, and the related research work is complicated and rich. Therefore, a comprehensive and systematic survey on the NAS is essential. Previously related surveys have begun to classify existing work mainly based on the key components of NAS: search space, search strategy, and evaluation strategy. While this classification method is more intuitive, it is difficult for readers to grasp the challenges and the landmark work involved. Therefore, in this survey, we provide a new perspective: beginning with an overview of the characteristics of the earliest NAS algorithms, summarizing the problems in these early NAS algorithms, and then providing solutions for subsequent related research work.
Besides, we conduct a detailed and comprehensive analysis, comparison, and summary of these works. Finally, we provide some possible future research directions.",True,True,"Ren, Pengzhen and Xiao, Yun and Chang, Xiaojun and Huang, Po-yao and Li, Zhihui and Chen, Xiaojiang and Wang, Xin",2021.0,,,,ACM Computing Surveys,"A Comprehensive Survey of Neural Architecture Search: Challenges and Solutions",A quick look at NAS (Neural Architecture Search) - Welcome,https://gachiemchiep.github.io/machine%20learning/NAS-survey-2020/,On this page. 2020 NAS surveyr A Comprehensive Survey of Neural Architecture Search: Challenges and Solutions. The current research results "KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,brauwers2021general,\cite{brauwers2021general},A General Survey on Attention Mechanisms in Deep Learning,http://arxiv.org/abs/2203.14263v1,"Attention is an important mechanism that can be employed for a variety of deep learning models across many different domains and tasks. This survey provides an overview of the most important attention mechanisms proposed in the literature. The various attention mechanisms are explained by means of a framework consisting of a general attention model, uniform notation, and a comprehensive taxonomy of attention mechanisms. Furthermore, the various measures for evaluating attention models are reviewed, and methods to characterize the structure of attention models based on the proposed framework are discussed. Last, future work in the field of attention models is considered.",True,True,"Brauwers, Gianni and Frasincar, Flavius",2023.0,,,10.1109/tkde.2021.3126456,IEEE Transactions on Knowledge and Data Engineering,A General Survey on Attention Mechanisms in Deep Learning,A General Survey on Attention Mechanisms in Deep Learning,http://arxiv.org/pdf/2203.14263v1,"Attention is an important mechanism that can be employed for a variety of deep learning models across many different domains and tasks. This survey provides an overview of the most important attention mechanisms proposed in the literature. The various attention mechanisms are explained by means of a framework consisting of a general attention model, uniform notation, and a comprehensive taxonomy of attention mechanisms. Furthermore, the various measures for evaluating attention models are reviewed, and methods to characterize the structure of attention models based on the proposed framework are discussed. Last, future work in the field of attention models is considered." "KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,vaswani2017attention,\cite{vaswani2017attention},Attention Is All You Need,http://arxiv.org/abs/1706.03762v7,"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU.
On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.",True,True,"Vaswani, Ashish and Shazeer, Noam and Parmar, Niki and Uszkoreit, Jakob and Jones, Llion and Gomez, Aidan N and Kaiser, {\L}ukasz and Polosukhin, Illia",2017.0,,,,,Attention Is All You Need,Attention Is All You Need,http://arxiv.org/pdf/1706.03762v7,"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data." "KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices",2505.24334v1,khan2022transformers,\cite{khan2022transformers},Transformers in Vision: A Survey,http://arxiv.org/abs/2101.01169v5,"Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequence as compared to recurrent networks e.g., Long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to fundamental concepts behind the success of Transformers i.e., self-attention, large-scale pre-training, and bidirectional encoding.
We then cover extensive applications of transformers in vision including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual-question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization) and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis on open research directions and possible future works.",True,True,"Khan, Salman and Naseer, Muzammal and Hayat, Munawar and Zamir, Syed Waqas and Khan, Fahad Shahbaz and Shah, Mubarak",2022.0,,,10.1145/3505244,ACM Computing Surveys,Transformers in Vision: A Survey,Transformers in Vision: A Survey,http://arxiv.org/pdf/2101.01169v5,"Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequence as compared to recurrent networks e.g., Long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to fundamental concepts behind the success of Transformers i.e., self-attention, large-scale pre-training, and bidirectional encoding. We then cover extensive applications of transformers in vision including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual-question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization) and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis on open research directions and possible future works." "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,TaylorKYMKRHM17,\cite{TaylorKYMKRHM17},A deep learning approach for generalized speech animation,,,True,False,"Sarah L. Taylor and Taehwan Kim and Yisong Yue and Moshe Mahler and James Krahe and Anastasio Garcia Rodriguez and Jessica K. Hodgins and Iain A.
Matthews",2017.0,,,,TOG,A deep learning approach for generalized speech animation,[PDF] A Deep Learning Approach for Generalized Speech Animation - TTIC,https://home.ttic.edu/~taehwan/taylor_etal_siggraph2017.pdf,We introduce a simple and efective deep learning approach to automatically generate natural looking speech animation that synchronizes to input speech. "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,cao2005expressive,\cite{cao2005expressive},Expressive Speech-driven Facial Animation with controllable emotions,http://arxiv.org/abs/2301.02008v2,"It is in high demand to generate facial animation with high realism, but it remains a challenging task. Existing approaches of speech-driven facial animation can produce satisfactory mouth movement and lip synchronization, but show weakness in dramatic emotional expressions and flexibility in emotion control. This paper presents a novel deep learning-based approach for expressive facial animation generation from speech that can exhibit wide-spectrum facial expressions with controllable emotion type and intensity. We propose an emotion controller module to learn the relationship between the emotion variations (e.g., types and intensity) and the corresponding facial expression parameters. It enables emotion-controllable facial animation, where the target expression can be continuously adjusted as desired. The qualitative and quantitative evaluations show that the animation generated by our method is rich in facial emotional expressiveness while retaining accurate lip movement, outperforming other state-of-the-art methods.",True,True,"Cao, Yong and Tien, Wen C and Faloutsos, Petros and Pighin, Fr{\'e}d{\'e}ric",2005.0,,,,ACM TOG,Expressive Speech-driven Facial Animation with controllable emotions,Expressive Speech-driven Facial Animation with ...,https://github.com/on1262/facialanimation,EXPRESSIVE SPEECH-DRIVEN FACIAL ANIMATION WITH CONTROLLABLE EMOTIONS. Source code for: Expressive Speech-driven Facial Animation with controllable emotions. "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,FaceFormer,\cite{FaceFormer},FaceFormer: Speech-Driven 3D Facial Animation with Transformers,http://arxiv.org/abs/2112.05329v4,"Speech-driven 3D facial animation is challenging due to the complex geometry of human faces and the limited availability of 3D audio-visual data. Prior works typically focus on learning phoneme-level features of short audio windows with limited context, occasionally resulting in inaccurate lip movements. To tackle this limitation, we propose a Transformer-based autoregressive model, FaceFormer, which encodes the long-term audio context and autoregressively predicts a sequence of animated 3D face meshes. To cope with the data scarcity issue, we integrate the self-supervised pre-trained speech representations. Also, we devise two biased attention mechanisms well suited to this specific task, including the biased cross-modal multi-head (MH) attention and the biased causal MH self-attention with a periodic positional encoding strategy. The former effectively aligns the audio-motion modalities, whereas the latter offers abilities to generalize to longer audio sequences. Extensive experiments and a perceptual user study show that our approach outperforms the existing state-of-the-arts. 
The code will be made available.",True,True,"Yingruo Fan and Zhaojiang Lin and Jun Saito and Wenping Wang and Taku Komura",2022.0,,,,,FaceFormer: Speech-Driven 3D Facial Animation with Transformers,[PDF] FaceFormer: Speech-Driven 3D Facial Animation With Transformers,https://openaccess.thecvf.com/content/CVPR2022/papers/Fan_FaceFormer_Speech-Driven_3D_Facial_Animation_With_Transformers_CVPR_2022_paper.pdf,An autoregressive transformer-based architecture for speech-driven 3D facial animation. FaceFormer encodes the long-term audio context and the history of face "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,CodeTalker,\cite{CodeTalker},CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior,http://arxiv.org/abs/2301.02379v2,"Speech-driven 3D facial animation has been widely studied, yet there is still a gap to achieving realism and vividness due to the highly ill-posed nature and scarcity of audio-visual data. Existing works typically formulate the cross-modal mapping into a regression task, which suffers from the regression-to-mean problem leading to over-smoothed facial motions. In this paper, we propose to cast speech-driven facial animation as a code query task in a finite proxy space of the learned codebook, which effectively promotes the vividness of the generated motions by reducing the cross-modal mapping uncertainty. The codebook is learned by self-reconstruction over real facial motions and thus embedded with realistic facial motion priors. Over the discrete motion space, a temporal autoregressive model is employed to sequentially synthesize facial motions from the input speech signal, which guarantees lip-sync as well as plausible facial expressions. We demonstrate that our approach outperforms current state-of-the-art methods both qualitatively and quantitatively. Also, a user study further justifies our superiority in perceptual quality.",True,True,"Jinbo Xing and Menghan Xia and Yuechen Zhang and Xiaodong Cun and Jue Wang and Tien{-}Tsin Wong",2023.0,,,,,CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior,Speech-Driven 3D Facial Animation with Discrete Motion Prior - arXiv,https://arxiv.org/abs/2301.02379,"In this paper, we propose to cast speech-driven facial animation as a code query task in a finite proxy space of the learned codebook." "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,FaceDiffuser,\cite{FaceDiffuser},"FaceDiffuser: Speech-Driven 3D Facial Animation Synthesis Using Diffusion",http://arxiv.org/abs/2309.11306v1,"Speech-driven 3D facial animation synthesis has been a challenging task both in industry and research. Recent methods mostly focus on deterministic deep learning methods meaning that given a speech input, the output is always the same. However, in reality, the non-verbal facial cues that reside throughout the face are non-deterministic in nature. In addition, majority of the approaches focus on 3D vertex based datasets and methods that are compatible with existing facial animation pipelines with rigged characters is scarce. To eliminate these issues, we present FaceDiffuser, a non-deterministic deep learning model to generate speech-driven facial animations that is trained with both 3D vertex and blendshape based datasets. Our method is based on the diffusion technique and uses the pre-trained large speech representation model HuBERT to encode the audio input. 
To the best of our knowledge, we are the first to employ the diffusion method for the task of speech-driven 3D facial animation synthesis. We have run extensive objective and subjective analyses and show that our approach achieves better or comparable results in comparison to the state-of-the-art methods. We also introduce a new in-house dataset that is based on a blendshape based rigged character. We recommend watching the accompanying supplementary video. The code and the dataset will be publicly available.",True,True,"Stefan Stan and Kazi Injamamul Haque and Zerrin Yumak",2023.0,,,,,"FaceDiffuser: Speech-Driven 3D Facial Animation Synthesis Using Diffusion",Speech-Driven 3D Facial Animation Synthesis Using Diffusion,https://dl.acm.org/doi/10.1145/3623264.3624447,"We present FaceDiffuser, a non-deterministic deep learning model to generate speech-driven facial animations that is trained with both 3D vertex and blendshape" "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,li2023mask,\cite{li2023mask},Mask-fpan: Semi-supervised face parsing in the wild with de-occlusion and uv gan,,,True,False,"Li, Lei and Zhang, Tianfang and Kang, Zhongfeng and Jiang, Xikun",2023.0,,,,Computers \& Graphics,Mask-fpan: Semi-supervised face parsing in the wild with de-occlusion and uv gan,Mask-FPAN: Semi-Supervised Face Parsing in the Wild ...,https://arxiv.org/abs/2212.09098,by L Li · 2022 · Cited by 22 — We propose a novel framework termed Mask-FPAN. It uses a de-occlusion module that learns to parse occluded faces in a semi-supervised way. "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,haque2023facexhubert,\cite{haque2023facexhubert},"FaceXHuBERT: Text-less Speech-driven E(X)pressive 3D Facial Animation Synthesis Using Self-Supervised Speech Representation Learning",http://arxiv.org/abs/2303.05416v1,"This paper presents FaceXHuBERT, a text-less speech-driven 3D facial animation generation method that allows to capture personalized and subtle cues in speech (e.g. identity, emotion and hesitation). It is also very robust to background noise and can handle audio recorded in a variety of situations (e.g. multiple people speaking). Recent approaches employ end-to-end deep learning taking into account both audio and text as input to generate facial animation for the whole face. However, scarcity of publicly available expressive audio-3D facial animation datasets poses a major bottleneck. The resulting animations still have issues regarding accurate lip-synching, expressivity, person-specific information and generalizability. We effectively employ self-supervised pretrained HuBERT model in the training process that allows us to incorporate both lexical and non-lexical information in the audio without using a large lexicon. Additionally, guiding the training with a binary emotion condition and speaker identity distinguishes the tiniest subtle facial motion. We carried out extensive objective and subjective evaluation in comparison to ground-truth and state-of-the-art work. A perceptual user study demonstrates that our approach produces superior results with respect to the realism of the animation 78% of the time in comparison to the state-of-the-art. In addition, our method is 4 times faster eliminating the use of complex sequential models such as transformers. We strongly recommend watching the supplementary video before reading the paper. 
We also provide the implementation and evaluation codes with a GitHub repository link.",True,True,"Haque, Kazi Injamamul and Yumak, Zerrin",2023.0,,,,,"FaceXHuBERT: Text-less Speech-driven E(X)pressive 3D Facial Animation Synthesis Using Self-Supervised Speech Representation Learning",Text-less Speech-driven E(X)pressive 3D Facial Animation ...,https://www.researchgate.net/publication/372492333_FaceXHuBERT_Text-less_Speech-driven_EXpressive_3D_Facial_Animation_Synthesis_Using_Self-Supervised_Speech_Representation_Learning,"This paper presents FaceXHuBERT, a text-less speech-driven 3D facial animation generation method that allows us to capture facial cues related to emotional" "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,EMOTE,\cite{EMOTE},Emotional Speech-Driven Animation with Content-Emotion Disentanglement,http://arxiv.org/abs/2306.08990v2,"To be widely adopted, 3D facial avatars must be animated easily, realistically, and directly from speech signals. While the best recent methods generate 3D animations that are synchronized with the input audio, they largely ignore the impact of emotions on facial expressions. Realistic facial animation requires lip-sync together with the natural expression of emotion. To that end, we propose EMOTE (Expressive Model Optimized for Talking with Emotion), which generates 3D talking-head avatars that maintain lip-sync from speech while enabling explicit control over the expression of emotion. To achieve this, we supervise EMOTE with decoupled losses for speech (i.e., lip-sync) and emotion. These losses are based on two key observations: (1) deformations of the face due to speech are spatially localized around the mouth and have high temporal frequency, whereas (2) facial expressions may deform the whole face and occur over longer intervals. Thus, we train EMOTE with a per-frame lip-reading loss to preserve the speech-dependent content, while supervising emotion at the sequence level. Furthermore, we employ a content-emotion exchange mechanism in order to supervise different emotions on the same audio, while maintaining the lip motion synchronized with the speech. To employ deep perceptual losses without getting undesirable artifacts, we devise a motion prior in the form of a temporal VAE. Due to the absence of high-quality aligned emotional 3D face datasets with speech, EMOTE is trained with 3D pseudo-ground-truth extracted from an emotional video dataset (i.e., MEAD). Extensive qualitative and perceptual evaluations demonstrate that EMOTE produces speech-driven facial animations with better lip-sync than state-of-the-art methods trained on the same data, while offering additional, high-quality emotional control.",True,True,"Dan{\v{e}}{\v{c}}ek, Radek and Chhatre, Kiran and Tripathi, Shashank and Wen, Yandong and Black, Michael and Bolkart, Timo",2023.0,,,,,Emotional Speech-Driven Animation with Content-Emotion Disentanglement,Emotional Speech-Driven Animation with Content-Emotion ...,https://dl.acm.org/doi/10.1145/3610548.3618183,"We propose EMOTE (Expressive Model Optimized for Talking with Emotion), which generates 3D talking-head avatars that maintain lip-sync from speech." 
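The supervision split that the EMOTE entry above describes (per-frame, spatially localized lip-sync losses; emotion supervised only over longer windows) can be prototyped as a two-term objective. The sketch below is illustrative only: the mouth-vertex mask, the mean pooling, and the linear emotion classifier are assumptions, not the published model.

```python
# Sketch of EMOTE-style decoupled losses: a per-frame content (lip) loss
# on mouth vertices plus a sequence-level emotion classification loss.
import torch
import torch.nn as nn

T, V, E = 30, 100, 8                 # frames, vertices, emotion classes
mouth_mask = torch.zeros(V, dtype=torch.bool)
mouth_mask[:20] = True               # assume vertices 0-19 cover the mouth

emo_clf = nn.Linear(3 * V, E)        # toy sequence-level emotion head

def emote_style_loss(pred, target, emo_label):
    """pred/target: (T, V, 3) vertex motion; emo_label: scalar class id."""
    # 1) speech content: per-frame loss restricted to the mouth region
    lip_loss = nn.functional.mse_loss(pred[:, mouth_mask],
                                      target[:, mouth_mask])
    # 2) emotion: supervised once per sequence on temporally pooled motion
    pooled = pred.reshape(T, -1).mean(dim=0)
    emo_loss = nn.functional.cross_entropy(emo_clf(pooled).unsqueeze(0),
                                           emo_label.unsqueeze(0))
    return lip_loss + emo_loss

pred = torch.randn(T, V, 3, requires_grad=True)
loss = emote_style_loss(pred, torch.randn(T, V, 3), torch.tensor(3))
loss.backward()
print(loss.item())
```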
"Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,peng2023emotalk,\cite{peng2023emotalk},EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation,http://arxiv.org/abs/2303.11089v2,"Speech-driven 3D face animation aims to generate realistic facial expressions that match the speech content and emotion. However, existing methods often neglect emotional facial expressions or fail to disentangle them from speech content. To address this issue, this paper proposes an end-to-end neural network to disentangle different emotions in speech so as to generate rich 3D facial expressions. Specifically, we introduce the emotion disentangling encoder (EDE) to disentangle the emotion and content in the speech by cross-reconstructed speech signals with different emotion labels. Then an emotion-guided feature fusion decoder is employed to generate a 3D talking face with enhanced emotion. The decoder is driven by the disentangled identity, emotional, and content embeddings so as to generate controllable personal and emotional styles. Finally, considering the scarcity of the 3D emotional talking face data, we resort to the supervision of facial blendshapes, which enables the reconstruction of plausible 3D faces from 2D emotional data, and contribute a large-scale 3D emotional talking face dataset (3D-ETF) to train the network. Our experiments and user studies demonstrate that our approach outperforms state-of-the-art methods and exhibits more diverse facial movements. We recommend watching the supplementary video: https://ziqiaopeng.github.io/emotalk",True,True,"Peng, Ziqiao and Wu, Haoyu and Song, Zhenbo and Xu, Hao and Zhu, Xiangyu and He, Jun and Liu, Hongyan and Fan, Zhaoxin",2023.0,,,,,EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation,Speech-Driven Emotional Disentanglement for 3D Face Animation,https://arxiv.org/abs/2303.11089,This paper proposes an end-to-end neural network to disentangle different emotions in speech so as to generate rich 3D facial expressions. "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,thambiraja20233diface,\cite{thambiraja20233diface},3DiFACE: Diffusion-based Speech-driven 3D Facial Animation and Editing,http://arxiv.org/abs/2312.00870v1,"We present 3DiFACE, a novel method for personalized speech-driven 3D facial animation and editing. While existing methods deterministically predict facial animations from speech, they overlook the inherent one-to-many relationship between speech and facial expressions, i.e., there are multiple reasonable facial expression animations matching an audio input. It is especially important in content creation to be able to modify generated motion or to specify keyframes. To enable stochasticity as well as motion editing, we propose a lightweight audio-conditioned diffusion model for 3D facial motion. This diffusion model can be trained on a small 3D motion dataset, maintaining expressive lip motion output. In addition, it can be finetuned for specific subjects, requiring only a short video of the person. 
Through quantitative and qualitative evaluations, we show that our method outperforms existing state-of-the-art techniques and yields speech-driven animations with greater fidelity and diversity.",True,True,"Balamurugan Thambiraja and Sadegh Aliakbarian and Darren Cosker and Justus Thies",2023.0,,,,CoRR,3DiFACE: Diffusion-based Speech-driven 3D Facial Animation and Editing,[2312.00870] 3DiFACE: Diffusion-based Speech-driven 3D ...,https://arxiv.org/abs/2312.00870,"by B Thambiraja · 2023 · Cited by 18 — Abstract:We present 3DiFACE, a novel method for personalized speech-driven 3D facial animation and editing." "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,VOCA,\cite{VOCA},"Capture, Learning, and Synthesis of 3D Speaking Styles",http://arxiv.org/abs/1905.03079v1,"Audio-driven 3D facial animation has been widely explored, but achieving realistic, human-like performance is still unsolved. This is due to the lack of available 3D datasets, models, and standard evaluation metrics. To address this, we introduce a unique 4D face dataset with about 29 minutes of 4D scans captured at 60 fps and synchronized audio from 12 speakers. We then train a neural network on our dataset that factors identity from facial motion. The learned model, VOCA (Voice Operated Character Animation) takes any speech signal as input - even speech in languages other than English - and realistically animates a wide range of adult faces. Conditioning on subject labels during training allows the model to learn a variety of realistic speaking styles. VOCA also provides animator controls to alter speaking style, identity-dependent facial shape, and pose (i.e. head, jaw, and eyeball rotations) during animation. To our knowledge, VOCA is the only realistic 3D facial animation model that is readily applicable to unseen subjects without retargeting. This makes VOCA suitable for tasks like in-game video, virtual reality avatars, or any scenario in which the speaker, speech, or language is not known in advance. We make the dataset and model available for research purposes at http://voca.is.tue.mpg.de.",True,True,"Daniel Cudeiro and Timo Bolkart and Cassidy Laidlaw and Anurag Ranjan and Michael J. Black",2019.0,,,,,"Capture, Learning, and Synthesis of 3D Speaking Styles","Capture, Learning, and Synthesis of 3D Speaking Styles",http://arxiv.org/pdf/1905.03079v1,"Audio-driven 3D facial animation has been widely explored, but achieving realistic, human-like performance is still unsolved. This is due to the lack of available 3D datasets, models, and standard evaluation metrics. To address this, we introduce a unique 4D face dataset with about 29 minutes of 4D scans captured at 60 fps and synchronized audio from 12 speakers. We then train a neural network on our dataset that factors identity from facial motion. The learned model, VOCA (Voice Operated Character Animation) takes any speech signal as input - even speech in languages other than English - and realistically animates a wide range of adult faces. Conditioning on subject labels during training allows the model to learn a variety of realistic speaking styles. VOCA also provides animator controls to alter speaking style, identity-dependent facial shape, and pose (i.e. head, jaw, and eyeball rotations) during animation. To our knowledge, VOCA is the only realistic 3D facial animation model that is readily applicable to unseen subjects without retargeting. 
This makes VOCA suitable for tasks like in-game video, virtual reality avatars, or any scenario in which the speaker, speech, or language is not known in advance. We make the dataset and model available for research purposes at http://voca.is.tue.mpg.de." "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,LG-LDM,\cite{LG-LDM},Expressive 3D Facial Animation Generation Based on Local-to-global Latent Diffusion,,,True,False,"Song, Wenfeng and Wang, Xuan and Jiang, Yiming and Li, Shuai and Hao, Aimin and Hou, Xia and Qin, Hong",2024.0,,,,TVCG,Expressive 3D Facial Animation Generation Based on Local-to-global Latent Diffusion,wangxuanx/Face-Diffusion-Model: The official pytorch code ...,https://github.com/wangxuanx/Face-Diffusion-Model,Expressive 3D Facial Animation Generation Based on Local-to-global Latent Diffusion ... Our method generates realistic facial animations by syncing lips with "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,fu2024mimic,\cite{fu2024mimic},"Mimic: Speaking Style Disentanglement for Speech-Driven 3D Facial Animation",http://arxiv.org/abs/2312.10877v1,"Speech-driven 3D facial animation aims to synthesize vivid facial animations that accurately synchronize with speech and match the unique speaking style. However, existing works primarily focus on achieving precise lip synchronization while neglecting to model the subject-specific speaking style, often resulting in unrealistic facial animations. To the best of our knowledge, this work makes the first attempt to explore the coupled information between the speaking style and the semantic content in facial motions. Specifically, we introduce an innovative speaking style disentanglement method, which enables arbitrary-subject speaking style encoding and leads to a more realistic synthesis of speech-driven facial animations. Subsequently, we propose a novel framework called \textbf{Mimic} to learn disentangled representations of the speaking style and content from facial motions by building two latent spaces for style and content, respectively. Moreover, to facilitate disentangled representation learning, we introduce four well-designed constraints: an auxiliary style classifier, an auxiliary inverse classifier, a content contrastive loss, and a pair of latent cycle losses, which can effectively contribute to the construction of the identity-related style space and semantic-related content space. Extensive qualitative and quantitative experiments conducted on three publicly available datasets demonstrate that our approach outperforms state-of-the-art methods and is capable of capturing diverse speaking styles for speech-driven 3D facial animation. The source code and supplementary video are publicly available at: https://zeqing-wang.github.io/Mimic/",True,True,"Hui Fu and Zeqing Wang and Ke Gong and Keze Wang and Tianshui Chen and Haojie Li and Haifeng Zeng and Wenxiong Kang",2024.0,,,,,"Mimic: Speaking Style Disentanglement for Speech-Driven 3D Facial Animation",[PDF] Speaking Style Disentanglement for Speech-Driven 3D Facial ...,https://ojs.aaai.org/index.php/AAAI/article/view/27945/27910,"We propose Mimic for style-content disentanglement and synthesizing facial animations matching an identity-specific speaking style, as illustrated in Figure 2." 
"Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,wav2lip,\cite{wav2lip},"A Lip Sync Expert Is All You Need for Speech to Lip Generation In The Wild",http://arxiv.org/abs/2008.10010v1,"In this work, we investigate the problem of lip-syncing a talking face video of an arbitrary identity to match a target speech segment. Current works excel at producing accurate lip movements on a static image or videos of specific people seen during the training phase. However, they fail to accurately morph the lip movements of arbitrary identities in dynamic, unconstrained talking face videos, resulting in significant parts of the video being out-of-sync with the new audio. We identify key reasons pertaining to this and hence resolve them by learning from a powerful lip-sync discriminator. Next, we propose new, rigorous evaluation benchmarks and metrics to accurately measure lip synchronization in unconstrained videos. Extensive quantitative evaluations on our challenging benchmarks show that the lip-sync accuracy of the videos generated by our Wav2Lip model is almost as good as real synced videos. We provide a demo video clearly showing the substantial impact of our Wav2Lip model and evaluation benchmarks on our website: \url{cvit.iiit.ac.in/research/projects/cvit-projects/a-lip-sync-expert-is-all-you-need-for-speech-to-lip-generation-in-the-wild}. The code and models are released at this GitHub repository: \url{github.com/Rudrabha/Wav2Lip}. You can also try out the interactive demo at this link: \url{bhaasha.iiit.ac.in/lipsync}.",True,True,"K. R. Prajwal and Rudrabha Mukhopadhyay and Vinay P. Namboodiri and C. V. Jawahar",2020.0,,,,,"A Lip Sync Expert Is All You Need for Speech to Lip Generation In The Wild",[2008.10010] A Lip Sync Expert Is All You Need for Speech ...,https://arxiv.org/abs/2008.10010,"**arXiv:2008.10010** (cs) View a PDF of the paper titled A Lip Sync Expert Is All You Need for Speech to Lip Generation In The Wild, by K R Prajwal and 3 other authors (or arXiv:2008.10010v1 [cs.CV] for this version) View a PDF of the paper titled A Lip Sync Expert Is All You Need for Speech to Lip Generation In The Wild, by K R Prajwal and 3 other authors - [x] Bibliographic Explorer Toggle - [x] Connected Papers Toggle - [x] Litmaps Toggle - [x] alphaXiv Toggle - [x] Links to Code Toggle - [x] DagsHub Toggle - [x] GotitPub Toggle - [x] Links to Code Toggle - [x] ScienceCast Toggle - [x] Replicate Toggle - [x] Core recommender toggle " "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,DBLP:conf/bmvc/ChenLLYW21,\cite{DBLP:conf/bmvc/ChenLLYW21},"Talking Head Generation with Audio and Speech Related Facial Action Units",,,True,False,"Sen Chen and Zhilei Liu and Jiaxing Liu and Zhengxiang Yan and Longbiao Wang",2021.0,,,,,"Talking Head Generation with Audio and Speech Related Facial Action Units",Talking Head Generation with Audio and Speech Related Facial ...,https://arxiv.org/abs/2110.09951,"In this paper, we propose a novel recurrent generative network that uses both audio and speech-related facial action units (AUs) as the driving information." "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,DeepSpeech,\cite{DeepSpeech},Deep Speech: Scaling up end-to-end speech recognition,http://arxiv.org/abs/1412.5567v2,"We present a state-of-the-art speech recognition system developed using end-to-end deep learning. 
Our architecture is significantly simpler than traditional speech systems, which rely on laboriously engineered processing pipelines; these traditional systems also tend to perform poorly when used in noisy environments. In contrast, our system does not need hand-designed components to model background noise, reverberation, or speaker variation, but instead directly learns a function that is robust to such effects. We do not need a phoneme dictionary, nor even the concept of a ""phoneme."" Key to our approach is a well-optimized RNN training system that uses multiple GPUs, as well as a set of novel data synthesis techniques that allow us to efficiently obtain a large amount of varied data for training. Our system, called Deep Speech, outperforms previously published results on the widely studied Switchboard Hub5'00, achieving 16.0% error on the full test set. Deep Speech also handles challenging noisy environments better than widely used, state-of-the-art commercial speech systems.",True,True,"Awni Y. Hannun and Carl Case and Jared Casper and Bryan Catanzaro and Greg Diamos and Erich Elsen and Ryan Prenger and Sanjeev Satheesh and Shubho Sengupta and Adam Coates and Andrew Y. Ng",2014.0,,,,CoRR,Deep Speech: Scaling up end-to-end speech recognition,[PDF] Deep Speech: Scaling up end-to-end speech recognition - arXiv,https://arxiv.org/pdf/1412.5567,"Deep Speech is an end-to-end speech recognition system using deep learning, a simpler architecture, and a large RNN trained with multiple GPUs." "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,wav2vec,\cite{wav2vec},"wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations",http://arxiv.org/abs/2006.11477v3,"We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. 
This demonstrates the feasibility of speech recognition with limited amounts of labeled data.",True,True,"Alexei Baevski and Yuhao Zhou and Abdelrahman Mohamed and Michael Auli",2020.0,,,,,"wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations",wav2vec 2.0: A Framework for Self-Supervised Learning of Speech ...,https://arxiv.org/abs/2006.11477,wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,hubert,\cite{hubert},"HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units",http://arxiv.org/abs/2106.07447v1,"Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets.",True,True,"Wei{-}Ning Hsu and Benjamin Bolte and Yao{-}Hung Hubert Tsai and Kushal Lakhotia and Ruslan Salakhutdinov and Abdelrahman Mohamed",2021.0,,,,ACM TASLP,"HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units",HuBERT: Self-Supervised Speech Representation Learning ... - arXiv,https://arxiv.org/abs/2106.07447,"We propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide" "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,ao2023gesturediffuclip,\cite{ao2023gesturediffuclip},GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents,http://arxiv.org/abs/2303.14613v4,"The automatic generation of stylized co-speech gestures has recently received increasing attention. Previous systems typically allow style control via predefined text labels or example motion clips, which are often not flexible enough to convey user intent accurately. In this work, we present GestureDiffuCLIP, a neural network framework for synthesizing realistic, stylized co-speech gestures with flexible style control. 
We leverage the power of the large-scale Contrastive-Language-Image-Pre-training (CLIP) model and present a novel CLIP-guided mechanism that extracts efficient style representations from multiple input modalities, such as a piece of text, an example motion clip, or a video. Our system learns a latent diffusion model to generate high-quality gestures and infuses the CLIP representations of style into the generator via an adaptive instance normalization (AdaIN) layer. We further devise a gesture-transcript alignment mechanism that ensures a semantically correct gesture generation based on contrastive learning. Our system can also be extended to allow fine-grained style control of individual body parts. We demonstrate an extensive set of examples showing the flexibility and generalizability of our model to a variety of style descriptions. In a user study, we show that our system outperforms the state-of-the-art approaches regarding human likeness, appropriateness, and style correctness.",True,True,"Ao, Tenglong and Zhang, Zeyi and Liu, Libin",2023.0,,,,ACM TOG,GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents,GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents,http://arxiv.org/pdf/2303.14613v4,"The automatic generation of stylized co-speech gestures has recently received increasing attention. Previous systems typically allow style control via predefined text labels or example motion clips, which are often not flexible enough to convey user intent accurately. In this work, we present GestureDiffuCLIP, a neural network framework for synthesizing realistic, stylized co-speech gestures with flexible style control. We leverage the power of the large-scale Contrastive-Language-Image-Pre-training (CLIP) model and present a novel CLIP-guided mechanism that extracts efficient style representations from multiple input modalities, such as a piece of text, an example motion clip, or a video. Our system learns a latent diffusion model to generate high-quality gestures and infuses the CLIP representations of style into the generator via an adaptive instance normalization (AdaIN) layer. We further devise a gesture-transcript alignment mechanism that ensures a semantically correct gesture generation based on contrastive learning. Our system can also be extended to allow fine-grained style control of individual body parts. We demonstrate an extensive set of examples showing the flexibility and generalizability of our model to a variety of style descriptions. In a user study, we show that our system outperforms the state-of-the-art approaches regarding human likeness, appropriateness, and style correctness." "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,liang2024omg,\cite{liang2024omg},"OMG: Towards Open-vocabulary Motion Generation via Mixture of Controllers",http://arxiv.org/abs/2312.08985v3,"We have recently seen tremendous progress in realistic text-to-motion generation. Yet, the existing methods often fail or produce implausible motions with unseen text inputs, which limits the applications. In this paper, we present OMG, a novel framework, which enables compelling motion generation from zero-shot open-vocabulary text prompts. Our key idea is to carefully tailor the pretrain-then-finetune paradigm into the text-to-motion generation. At the pre-training stage, our model improves the generation ability by learning the rich out-of-domain inherent motion traits. 
To this end, we scale up a large unconditional diffusion model up to 1B parameters, so as to utilize the massive unlabeled motion data up to over 20M motion instances. At the subsequent fine-tuning stage, we introduce motion ControlNet, which incorporates text prompts as conditioning information, through a trainable copy of the pre-trained model and the proposed novel Mixture-of-Controllers (MoC) block. MoC block adaptively recognizes various ranges of the sub-motions with a cross-attention mechanism and processes them separately with the text-token-specific experts. Such a design effectively aligns the CLIP token embeddings of text prompts to various ranges of compact and expressive motion features. Extensive experiments demonstrate that our OMG achieves significant improvements over the state-of-the-art methods on zero-shot text-to-motion generation. Project page: https://tr3e.github.io/omg-page.",True,True,"Liang, Han and Bao, Jiacheng and Zhang, Ruichi and Ren, Sihan and Xu, Yuecheng and Yang, Sibei and Chen, Xin and Yu, Jingyi and Xu, Lan",2024.0,,,,,"OMG: Towards Open-vocabulary Motion Generation via Mixture of Controllers",[PDF] OMG: Towards Open-vocabulary Motion Generation via Mixture of ...,https://openaccess.thecvf.com/content/CVPR2024/papers/Liang_OMG_Towards_Open-vocabulary_Motion_Generation_via_Mixture_of_Controllers_CVPR_2024_paper.pdf,"We propose a fine-tuning scheme for text conditioning, utilizing a mixture of controllers to effectively improve the alignment between text and motion. 2." "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,zhang2022motiondiffuse,\cite{zhang2022motiondiffuse},MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model,http://arxiv.org/abs/2208.15001v1,"Human motion modeling is important for many modern graphics applications, which typically require professional skills. In order to remove the skill barriers for laymen, recent motion generation methods can directly generate human motions conditioned on natural languages. However, it remains challenging to achieve diverse and fine-grained motion generation with various text inputs. To address this problem, we propose MotionDiffuse, the first diffusion model-based text-driven motion generation framework, which demonstrates several desired properties over existing methods. 1) Probabilistic Mapping. Instead of a deterministic language-motion mapping, MotionDiffuse generates motions through a series of denoising steps in which variations are injected. 2) Realistic Synthesis. MotionDiffuse excels at modeling complicated data distribution and generating vivid motion sequences. 3) Multi-Level Manipulation. MotionDiffuse responds to fine-grained instructions on body parts, and arbitrary-length motion synthesis with time-varied text prompts. Our experiments show MotionDiffuse outperforms existing SoTA methods by convincing margins on text-driven motion generation and action-conditioned motion generation. A qualitative analysis further demonstrates MotionDiffuse's controllability for comprehensive motion generation. 
Homepage: https://mingyuan-zhang.github.io/projects/MotionDiffuse.html",True,True,"Mingyuan Zhang and Zhongang Cai and Liang Pan and Fangzhou Hong and Xinying Guo and Lei Yang and Ziwei Liu",2024.0,,,,TPAMI,MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model,Text-Driven Human Motion Generation With Diffusion Model,https://dl.acm.org/doi/abs/10.1109/TPAMI.2024.3355414,"MotionDiffuse responds to fine-grained instructions on body parts, and arbitrary-length motion synthesis with time-varied text prompts." "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,mughal2024convofusion,\cite{mughal2024convofusion},"ConvoFusion: Multi-Modal Conversational Diffusion for Co-Speech Gesture Synthesis",http://arxiv.org/abs/2403.17936v1,"Gestures play a key role in human communication. Recent methods for co-speech gesture generation, while managing to generate beat-aligned motions, struggle generating gestures that are semantically aligned with the utterance. Compared to beat gestures that align naturally to the audio signal, semantically coherent gestures require modeling the complex interactions between the language and human motion, and can be controlled by focusing on certain words. Therefore, we present ConvoFusion, a diffusion-based approach for multi-modal gesture synthesis, which can not only generate gestures based on multi-modal speech inputs, but can also facilitate controllability in gesture synthesis. Our method proposes two guidance objectives that allow the users to modulate the impact of different conditioning modalities (e.g. audio vs text) as well as to choose certain words to be emphasized during gesturing. Our method is versatile in that it can be trained either for generating monologue gestures or even the conversational gestures. To further advance the research on multi-party interactive gestures, the DnD Group Gesture dataset is released, which contains 6 hours of gesture data showing 5 people interacting with one another. We compare our method with several recent works and demonstrate effectiveness of our method on a variety of tasks. We urge the reader to watch our supplementary video at our website.",True,True,"Mughal, Muhammad Hamza and Dabral, Rishabh and Habibie, Ikhsanul and Donatelli, Lucia and Habermann, Marc and Theobalt, Christian",2024.0,,,,,"ConvoFusion: Multi-Modal Conversational Diffusion for Co-Speech Gesture Synthesis",Multi-Modal Conversational Diffusion for Co-Speech Gesture ... - arXiv,https://arxiv.org/abs/2403.17936,"We present ConvoFusion, a diffusion-based approach for multi-modal gesture synthesis, which can not only generate gestures based on multi-modal speech inputs." "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,zhao2024media2face,\cite{zhao2024media2face},"Media2Face: Co-speech Facial Animation Generation With Multi-Modality Guidance",http://arxiv.org/abs/2401.15687v2,"The synthesis of 3D facial animations from speech has garnered considerable attention. Due to the scarcity of high-quality 4D facial data and well-annotated abundant multi-modality labels, previous methods often suffer from limited realism and a lack of flexible conditioning. We address this challenge through a trilogy. We first introduce Generalized Neural Parametric Facial Asset (GNPFA), an efficient variational auto-encoder mapping facial geometry and images to a highly generalized expression latent space, decoupling expressions and identities. 
Then, we utilize GNPFA to extract high-quality expressions and accurate head poses from a large array of videos. This presents the M2F-D dataset, a large, diverse, and scan-level co-speech 3D facial animation dataset with well-annotated emotional and style labels. Finally, we propose Media2Face, a diffusion model in GNPFA latent space for co-speech facial animation generation, accepting rich multi-modality guidances from audio, text, and image. Extensive experiments demonstrate that our model not only achieves high fidelity in facial animation synthesis but also broadens the scope of expressiveness and style adaptability in 3D facial animation.",True,True,"Qingcheng Zhao and Pengyu Long and Qixuan Zhang and Dafei Qin and Han Liang and Longwen Zhang and Yingliang Zhang and Jingyi Yu and Lan Xu",2024.0,,,,,"Media2Face: Co-speech Facial Animation Generation With Multi-Modality Guidance",Co-speech Facial Animation Generation With Multi-Modality Guidance,https://arxiv.org/abs/2401.15687,"We propose Media2Face, a diffusion model in GNPFA latent space for co-speech facial animation generation, accepting rich multi-modality guidances from audio," "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,DBLP:conf/cvpr/ChhatreDABPBB24,\cite{DBLP:conf/cvpr/ChhatreDABPBB24},"Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion",http://arxiv.org/abs/2312.04466v2,"Existing methods for synthesizing 3D human gestures from speech have shown promising results, but they do not explicitly model the impact of emotions on the generated gestures. Instead, these methods directly output animations from speech without control over the expressed emotion. To address this limitation, we present AMUSE, an emotional speech-driven body animation model based on latent diffusion. Our observation is that content (i.e., gestures related to speech rhythm and word utterances), emotion, and personal style are separable. To account for this, AMUSE maps the driving audio to three disentangled latent vectors: one for content, one for emotion, and one for personal style. A latent diffusion model, trained to generate gesture motion sequences, is then conditioned on these latent vectors. Once trained, AMUSE synthesizes 3D human gestures directly from speech with control over the expressed emotions and style by combining the content from the driving speech with the emotion and style of another speech sequence. Randomly sampling the noise of the diffusion model further generates variations of the gesture with the same emotional expressivity. Qualitative, quantitative, and perceptual evaluations demonstrate that AMUSE outputs realistic gesture sequences. Compared to the state of the art, the generated gestures are better synchronized with the speech content, and better represent the emotion expressed by the input speech. Our code is available at amuse.is.tue.mpg.de.",True,True,"Kiran Chhatre and Radek Danecek and Nikos Athanasiou and Giorgio Becherini and Christopher E. Peters and Michael J. 
Black and Timo Bolkart",2024.0,,,,,"Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion",[2312.04466] Emotional Speech-driven 3D Body Animation via ...,https://arxiv.org/abs/2312.04466,"To account for this, AMUSE maps the driving audio to three disentangled latent vectors: one for content, one for emotion, and one for personal" "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,ElizaldeZR19,\cite{ElizaldeZR19},"Cross Modal Audio Search and Retrieval with Joint Embeddings Based on Text and Audio",,,True,False,"Benjamin Elizalde and Shuayb Zarar and Bhiksha Raj",2019.0,,,,,"Cross Modal Audio Search and Retrieval with Joint Embeddings Based on Text and Audio",Cross Modal Audio Search and Retrieval with Joint Embeddings ...,https://www.microsoft.com/en-us/research/publication/cross-modal-audio-search-and-retrieval-with-joint-embeddings-based-on-text-and-audio/,Missing: 04/08/2025 "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,Yu0L19,\cite{Yu0L19},"Mining Audio, Text and Visual Information for Talking Face Generation",,,True,False,"Lingyun Yu and Jun Yu and Qiang Ling",2019.0,,,,,"Mining Audio, Text and Visual Information for Talking Face Generation","Mining Audio, Text and Visual Information for Talking Face Generation",https://ieeexplore.ieee.org/document/8970886,"First, a multimodal learning method is proposed to generate accurate mouth landmarks with multimedia inputs (both text and audio). Second, a network named" "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",2505.23290v1,EMAGE,\cite{EMAGE},"EMAGE: Towards Unified Holistic Co-Speech Gesture Generation via Expressive Masked Audio Gesture Modeling",http://arxiv.org/abs/2401.00374v5,"We propose EMAGE, a framework to generate full-body human gestures from audio and masked gestures, encompassing facial, local body, hands, and global movements. To achieve this, we first introduce BEAT2 (BEAT-SMPLX-FLAME), a new mesh-level holistic co-speech dataset. BEAT2 combines a MoShed SMPL-X body with FLAME head parameters and further refines the modeling of head, neck, and finger movements, offering a community-standardized, high-quality 3D motion captured dataset. EMAGE leverages masked body gesture priors during training to boost inference performance. It involves a Masked Audio Gesture Transformer, facilitating joint training on audio-to-gesture generation and masked gesture reconstruction to effectively encode audio and body gesture hints. Encoded body hints from masked gestures are then separately employed to generate facial and body movements. Moreover, EMAGE adaptively merges speech features from the audio's rhythm and content and utilizes four compositional VQ-VAEs to enhance the results' fidelity and diversity. Experiments demonstrate that EMAGE generates holistic gestures with state-of-the-art performance and is flexible in accepting predefined spatial-temporal gesture inputs, generating complete, audio-synchronized results. Our code and dataset are available https://pantomatrix.github.io/EMAGE/",True,True,"Haiyang Liu and Zihao Zhu and Giorgio Becherini and Yichen Peng and Mingyang Su and You Zhou and Xuefei Zhe and Naoya Iwamoto and Bo Zheng and Michael J. 
Black",2024.0,,,,,"EMAGE: Towards Unified Holistic Co-Speech Gesture Generation via Expressive Masked Audio Gesture Modeling",EMAGE - CVPR 2024 Open Access Repository,https://openaccess.thecvf.com/content/CVPR2024/html/Liu_EMAGE_Towards_Unified_Holistic_Co-Speech_Gesture_Generation_via_Expressive_Masked_CVPR_2024_paper.html,"EMAGE: Towards Unified Holistic Co-Speech Gesture Generation via Expressive Masked Audio Gesture Modeling. Haiyang Liu, Zihao Zhu, Giorgio Becherini, Yichen" "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,RN5318,\cite{RN5318},"Snapshot Compressive Imaging: Principle, Implementation, Theory, Algorithms and Applications",http://arxiv.org/abs/2103.04421v1,"Capturing high-dimensional (HD) data is a long-term challenge in signal processing and related fields. Snapshot compressive imaging (SCI) uses a two-dimensional (2D) detector to capture HD ($\ge3$D) data in a {\em snapshot} measurement. Via novel optical designs, the 2D detector samples the HD data in a {\em compressive} manner; following this, algorithms are employed to reconstruct the desired HD data-cube. SCI has been used in hyperspectral imaging, video, holography, tomography, focal depth imaging, polarization imaging, microscopy, \etc.~Though the hardware has been investigated for more than a decade, the theoretical guarantees have only recently been derived. Inspired by deep learning, various deep neural networks have also been developed to reconstruct the HD data-cube in spectral SCI and video SCI. This article reviews recent advances in SCI hardware, theory and algorithms, including both optimization-based and deep-learning-based algorithms. Diverse applications and the outlook of SCI are also discussed.",True,True,"Yuan, Xin and Brady, David J. and Katsaggelos, Aggelos K.",2021.0,,,10.1109/msp.2020.3023869,IEEE Signal Processing Magazine,"Snapshot Compressive Imaging: Principle, Implementation, Theory, Algorithms and Applications","Snapshot Compressive Imaging: Theory, Algorithms, and ...",https://www.researchgate.net/publication/349697698_Snapshot_Compressive_Imaging_Theory_Algorithms_and_Applications,"Snapshot compressive imaging (SCI) uses a 2D detector to capture HD (>3D) data in a snapshot measurement. Via novel optical designs, the 2D detector samples the" "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,wang2023full,\cite{wang2023full},Full-resolution and full-dynamic-range coded aperture compressive temporal imaging,,,True,False,"Wang, Ping and Wang, Lishun and Qiao, Mu and Yuan, Xin",2023.0,,,,Optics Letters,Full-resolution and full-dynamic-range coded aperture compressive temporal imaging,Full-resolution and full-dynamic-range coded aperture ...,https://opg.optica.org/abstract.cfm?uri=ol-48-18-4813,"by P Wang · 2023 · Cited by 9 — Coded aperture compressive temporal imaging (CACTI) aims to capture a sequence of video frames in a single shot, using an off-the-shelf 2D sensor." 
"Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,wang2024hierarchical,\cite{wang2024hierarchical},"Hierarchical Separable Video Transformer for Snapshot Compressive Imaging",http://arxiv.org/abs/2407.11946v2,"Transformers have achieved the state-of-the-art performance on solving the inverse problem of Snapshot Compressive Imaging (SCI) for video, whose ill-posedness is rooted in the mixed degradation of spatial masking and temporal aliasing. However, previous Transformers lack an insight into the degradation and thus have limited performance and efficiency. In this work, we tailor an efficient reconstruction architecture without temporal aggregation in early layers and Hierarchical Separable Video Transformer (HiSViT) as building block. HiSViT is built by multiple groups of Cross-Scale Separable Multi-head Self-Attention (CSS-MSA) and Gated Self-Modulated Feed-Forward Network (GSM-FFN) with dense connections, each of which is conducted within a separate channel portions at a different scale, for multi-scale interactions and long-range modeling. By separating spatial operations from temporal ones, CSS-MSA introduces an inductive bias of paying more attention within frames instead of between frames while saving computational overheads. GSM-FFN further enhances the locality via gated mechanism and factorized spatial-temporal convolutions. Extensive experiments demonstrate that our method outperforms previous methods by $\!>\!0.5$ dB with comparable or fewer parameters and complexity. The source codes and pretrained models are released at https://github.com/pwangcs/HiSViT.",True,True,"Wang, Ping and Zhang, Yulun and Wang, Lishun and Yuan, Xin",2024.0,,,,,"Hierarchical Separable Video Transformer for Snapshot Compressive Imaging",pwangcs/HiSViT: [ECCV 2024] Hierarchical Separable ...,https://github.com/pwangcs/HiSViT,"[ECCV 2024] Hierarchical Separable Video Transformer for Snapshot Compressive Imaging · Ping Wang, Yulun Zhang, Lishun Wang, Xin Yuan. Video SCI Reconstruction" "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,wang2023deep,\cite{wang2023deep},Deep Optics for Video Snapshot Compressive Imaging,http://arxiv.org/abs/2404.05274v1,"Video snapshot compressive imaging (SCI) aims to capture a sequence of video frames with only a single shot of a 2D detector, whose backbones rest in optical modulation patterns (also known as masks) and a computational reconstruction algorithm. Advanced deep learning algorithms and mature hardware are putting video SCI into practical applications. Yet, there are two clouds in the sunshine of SCI: i) low dynamic range as a victim of high temporal multiplexing, and ii) existing deep learning algorithms' degradation on real system. To address these challenges, this paper presents a deep optics framework to jointly optimize masks and a reconstruction network. Specifically, we first propose a new type of structural mask to realize motion-aware and full-dynamic-range measurement. Considering the motion awareness property in measurement domain, we develop an efficient network for video SCI reconstruction using Transformer to capture long-term temporal dependencies, dubbed Res2former. Moreover, sensor response is introduced into the forward model of video SCI to guarantee end-to-end model training close to real system. Finally, we implement the learned structural masks on a digital micro-mirror device. 
Experimental results on synthetic and real data validate the effectiveness of the proposed framework. We believe this is a milestone for real-world video SCI. The source code and data are available at https://github.com/pwangcs/DeepOpticsSCI.",True,True,"Wang, Ping and Wang, Lishun and Yuan, Xin",2023.0,,,,,Deep Optics for Video Snapshot Compressive Imaging,Deep Optics for Video Snapshot Compressive Imaging,http://arxiv.org/pdf/2404.05274v1,"Video snapshot compressive imaging (SCI) aims to capture a sequence of video frames with only a single shot of a 2D detector, whose backbones rest in optical modulation patterns (also known as masks) and a computational reconstruction algorithm. Advanced deep learning algorithms and mature hardware are putting video SCI into practical applications. Yet, there are two clouds in the sunshine of SCI: i) low dynamic range as a victim of high temporal multiplexing, and ii) existing deep learning algorithms' degradation on real system. To address these challenges, this paper presents a deep optics framework to jointly optimize masks and a reconstruction network. Specifically, we first propose a new type of structural mask to realize motion-aware and full-dynamic-range measurement. Considering the motion awareness property in measurement domain, we develop an efficient network for video SCI reconstruction using Transformer to capture long-term temporal dependencies, dubbed Res2former. Moreover, sensor response is introduced into the forward model of video SCI to guarantee end-to-end model training close to real system. Finally, we implement the learned structural masks on a digital micro-mirror device. Experimental results on synthetic and real data validate the effectiveness of the proposed framework. We believe this is a milestone for real-world video SCI. The source code and data are available at https://github.com/pwangcs/DeepOpticsSCI." "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,figueiredo2007gradient,\cite{figueiredo2007gradient},Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems,,,True,False,"Figueiredo, M{\'a}rio AT and Nowak, Robert D and Wright, Stephen J",2007.0,,,,IEEE Journal of Selected Topics in Signal Processing,Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems,Gradient Projection for Sparse Reconstruction: Application ...,https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=a5a5f31a9d521db9566db94410b06defbbd40c22,"by MAT Figueiredo · Cited by 4600 — Gradient projection (GP) algorithms are proposed for sparse reconstruction in signal processing, using bound-constrained quadratic programming, and are faster" "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,4587391,\cite{4587391},An efficient algorithm for compressed MR imaging using total variation and wavelets,,,True,False,"Shiqian Ma and Wotao Yin and Yin Zhang and Chakraborty, Amit",2008.0,,,,,An efficient algorithm for compressed MR imaging using total variation and wavelets,Compressed MRI reconstruction exploiting a rotation-invariant total ...,https://www.sciencedirect.com/science/article/abs/pii/S0730725X19307507,An efficient algorithm for compressed MR imaging using total variation and wavelets. M. Lustig et al. Compressed sensing MRI. IEEE Signal Processing Magazine. 
"Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,he2009exploiting,\cite{he2009exploiting},Exploiting structure in wavelet-based Bayesian compressive sensing,,,True,False,"He, Lihan and Carin, Lawrence",2009.0,,,,IEEE Transactions on Signal Processing,Exploiting structure in wavelet-based Bayesian compressive sensing,Exploiting structure in wavelet-based Bayesian compressive sensing,https://dl.acm.org/doi/abs/10.1109/tsp.2009.2022003,The structure exploited within the wavelet coefficients is consistent with that used in wavelet-based compression algorithms. A hierarchical Bayesian model is "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,blumensath2009iterative,\cite{blumensath2009iterative},Iterative Hard Thresholding for Compressed Sensing,http://arxiv.org/abs/0805.0510v1,"Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper) - It gives near-optimal error guarantees. - It is robust to observation noise. - It succeeds with a minimum number of observations. - It can be used with any sampling operator for which the operator and its adjoint can be computed. - The memory requirement is linear in the problem size. - Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint. - It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal. - Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.",True,True,"Blumensath, Thomas and Davies, Mike E",2009.0,,,,Applied and Computational Harmonic Analysis,Iterative Hard Thresholding for Compressed Sensing,Iterative Hard Thresholding for Compressed Sensing,http://arxiv.org/pdf/0805.0510v1,"Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper) - It gives near-optimal error guarantees. - It is robust to observation noise. - It succeeds with a minimum number of observations. - It can be used with any sampling operator for which the operator and its adjoint can be computed. - The memory requirement is linear in the problem size. - Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint. - It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal. - Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity." 
"Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,beck2009fast,\cite{beck2009fast},A fast iterative shrinkage-thresholding algorithm for linear inverse problems,,,True,False,"Beck, Amir and Teboulle, Marc",2009.0,,,,SIAM Journal on Imaging Sciences,A fast iterative shrinkage-thresholding algorithm for linear inverse problems,[PDF] A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse ...,https://www.ceremade.dauphine.fr/~carlier/FISTA,Abstract. We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,kim2010compressed,\cite{kim2010compressed},Compressed sensing using a Gaussian scale mixtures model in wavelet domain,,,True,False,"Kim, Yookyung and Nadar, Mariappan S and Bilgin, Ali",2010.0,,,,,Compressed sensing using a Gaussian scale mixtures model in wavelet domain,Compressed Sensing With a Gaussian Scale Mixture ...,https://pmc.ncbi.nlm.nih.gov/articles/PMC6207971/,"by J Meng · 2018 · Cited by 11 — In this method, the structure dependencies of signals in the wavelet domain were incorporated into the imaging framework through the Gaussian scale mixture" "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,yang2011alternating,\cite{yang2011alternating},Alternating Direction Algorithms for {$\ell_{1}$}-Problems in Compressive Sensing,,,True,False,"Yang, Junfeng and Zhang, Yin",2011.0,,,,SIAM Journal on Scientific Computing,Alternating Direction Algorithms for {$\ell_{1}$}-Problems in Compressive Sensing,[PDF] alternating direction algorithms for `1-problems in compressive ...,https://www.cmor-faculty.rice.edu/~zhang/reports/tr0937.pdf,"In this paper, we propose and study the use of alternating direction algorithms for several `1-norm minimization problems arising from sparse solution recovery" "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,dong2014compressive,\cite{dong2014compressive},Compressive sensing via nonlocal low-rank regularization,,,True,False,"Dong, Weisheng and Shi, Guangming and Li, Xin and Ma, Yi and Huang, Feng",2014.0,,,,IEEE Transactions on Image Processing,Compressive sensing via nonlocal low-rank regularization,[PDF] Compressive Sensing via Nonlocal Low-rank Regularization,http://people.eecs.berkeley.edu/~yima/psfile/CS_low_rank_final.pdf,Experimental results have shown that the proposed NLR-CS algorithm can significantly outperform existing state-of-the-art CS techniques for image recovery. "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,Metzler2016FromDT,\cite{Metzler2016FromDT},From Denoising to Compressed Sensing,http://arxiv.org/abs/1406.4175v5,"A denoising algorithm seeks to remove noise, errors, or perturbations from a signal. Extensive research has been devoted to this arena over the last several decades, and as a result, today's denoisers can effectively remove large amounts of additive white Gaussian noise. A compressed sensing (CS) reconstruction algorithm seeks to recover a structured signal acquired using a small number of randomized measurements. Typical CS reconstruction algorithms can be cast as iteratively estimating a signal from a perturbed observation. 
This paper answers a natural question: How can one effectively employ a generic denoiser in a CS reconstruction algorithm? In response, we develop an extension of the approximate message passing (AMP) framework, called Denoising-based AMP (D-AMP), that can integrate a wide class of denoisers within its iterations. We demonstrate that, when used with a high performance denoiser for natural images, D-AMP offers state-of-the-art CS recovery performance while operating tens of times faster than competing methods. We explain the exceptional performance of D-AMP by analyzing some of its theoretical features. A key element in D-AMP is the use of an appropriate Onsager correction term in its iterations, which coerces the signal perturbation at each iteration to be very close to the white Gaussian noise that denoisers are typically designed to remove.",True,True,"Metzler, Christopher A and Maleki, Arian and Baraniuk, Richard G",2016.0,,,,IEEE Transactions on Information Theory,From Denoising to Compressed Sensing,From Denoising to Compressed Sensing,http://arxiv.org/pdf/1406.4175v5,"A denoising algorithm seeks to remove noise, errors, or perturbations from a signal. Extensive research has been devoted to this arena over the last several decades, and as a result, today's denoisers can effectively remove large amounts of additive white Gaussian noise. A compressed sensing (CS) reconstruction algorithm seeks to recover a structured signal acquired using a small number of randomized measurements. Typical CS reconstruction algorithms can be cast as iteratively estimating a signal from a perturbed observation. This paper answers a natural question: How can one effectively employ a generic denoiser in a CS reconstruction algorithm? In response, we develop an extension of the approximate message passing (AMP) framework, called Denoising-based AMP (D-AMP), that can integrate a wide class of denoisers within its iterations. We demonstrate that, when used with a high performance denoiser for natural images, D-AMP offers state-of-the-art CS recovery performance while operating tens of times faster than competing methods. We explain the exceptional performance of D-AMP by analyzing some of its theoretical features. A key element in D-AMP is the use of an appropriate Onsager correction term in its iterations, which coerces the signal perturbation at each iteration to be very close to the white Gaussian noise that denoisers are typically designed to remove." "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,zhang2021plug,\cite{zhang2021plug},Deep Plug-and-Play Prior for Hyperspectral Image Restoration,http://arxiv.org/abs/2209.08240v1,"Deep-learning-based hyperspectral image (HSI) restoration methods have gained great popularity for their remarkable performance but often demand expensive network retraining whenever the specifics of task changes. In this paper, we propose to restore HSIs in a unified approach with an effective plug-and-play method, which can jointly retain the flexibility of optimization-based methods and utilize the powerful representation capability of deep neural networks. Specifically, we first develop a new deep HSI denoiser leveraging gated recurrent convolution units, short- and long-term skip connections, and an augmented noise level map to better exploit the abundant spatio-spectral information within HSIs. 
It, therefore, leads to the state-of-the-art performance on HSI denoising under both Gaussian and complex noise settings. Then, the proposed denoiser is inserted into the plug-and-play framework as a powerful implicit HSI prior to tackle various HSI restoration tasks. Through extensive experiments on HSI super-resolution, compressed sensing, and inpainting, we demonstrate that our approach often achieves superior performance, which is competitive with or even better than the state-of-the-art on each task, via a single model without any task-specific training.",True,True,"Zhang, Kai and Li, Yawei and Zuo, Wangmeng and Zhang, Lei and Van Gool, Luc and Timofte, Radu",2021.0,,,,IEEE Transactions on Pattern Analysis and Machine Intelligence,Deep Plug-and-Play Prior for Hyperspectral Image Restoration,Deep Plug-and-Play Prior for Hyperspectral Image Restoration,https://www.researchgate.net/publication/363667470_Deep_Plug-and-Play_Prior_for_Hyperspectral_Image_Restoration,"In this paper, we propose to restore HSIs in a unified approach with an effective plug-and-play method, which can jointly retain the flexibility" "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,hurault2022gradient,\cite{hurault2022gradient},Gradient Step Denoiser for convergent Plug-and-Play,http://arxiv.org/abs/2110.03220v2,"Plug-and-Play methods constitute a class of iterative algorithms for imaging problems where regularization is performed by an off-the-shelf denoiser. Although Plug-and-Play methods can lead to tremendous visual performance for various image problems, the few existing convergence guarantees are based on unrealistic (or suboptimal) hypotheses on the denoiser, or limited to strongly convex data terms. In this work, we propose a new type of Plug-and-Play methods, based on half-quadratic splitting, for which the denoiser is realized as a gradient descent step on a functional parameterized by a deep neural network. Exploiting convergence results for proximal gradient descent algorithms in the non-convex setting, we show that the proposed Plug-and-Play algorithm is a convergent iterative scheme that targets stationary points of an explicit global functional. Besides, experiments show that it is possible to learn such a deep denoiser while not compromising the performance in comparison to other state-of-the-art deep denoisers used in Plug-and-Play schemes. We apply our proximal gradient algorithm to various ill-posed inverse problems, e.g. deblurring, super-resolution and inpainting. For all these applications, numerical results empirically confirm the convergence results. Experiments also show that this new algorithm reaches state-of-the-art performance, both quantitatively and qualitatively.",True,True,"Hurault, Samuel and Leclaire, Arthur and Papadakis, Nicolas",2022.0,,,,,Gradient Step Denoiser for convergent Plug-and-Play,[2110.03220] Gradient Step Denoiser for convergent Plug-and-Play,https://arxiv.org/abs/2110.03220,"We propose a new type of Plug-and-Play methods, based on half-quadratic splitting, for which the denoiser is realized as a gradient descent step." 
"Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,hurault2022proximal,\cite{hurault2022proximal},Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization,,,True,False,"Hurault, Samuel and Leclaire, Arthur and Papadakis, Nicolas",2022.0,,,,,Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization,[PDF] Proximal Denoiser for Convergent Plug-and-Play Optimization with ...,https://icml.cc/media/icml-2022/Slides/18135.pdf,"Proximal Denoiser for Convergent. Plug-and-Play Optimization with Nonconvex. Regularization. Samuel Hurault, Arthur Leclaire, Nicolas Papadakis. Institut de" "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,fangs,\cite{fangs},What's in a Prior? Learned Proximal Networks for Inverse Problems,http://arxiv.org/abs/2310.14344v2,"Proximal operators are ubiquitous in inverse problems, commonly appearing as part of algorithmic strategies to regularize problems that are otherwise ill-posed. Modern deep learning models have been brought to bear for these tasks too, as in the framework of plug-and-play or deep unrolling, where they loosely resemble proximal operators. Yet, something essential is lost in employing these purely data-driven approaches: there is no guarantee that a general deep network represents the proximal operator of any function, nor is there any characterization of the function for which the network might provide some approximate proximal. This not only makes guaranteeing convergence of iterative schemes challenging but, more fundamentally, complicates the analysis of what has been learned by these networks about their training data. Herein we provide a framework to develop learned proximal networks (LPN), prove that they provide exact proximal operators for a data-driven nonconvex regularizer, and show how a new training strategy, dubbed proximal matching, provably promotes the recovery of the log-prior of the true data distribution. Such LPN provide general, unsupervised, expressive proximal operators that can be used for general inverse problems with convergence guarantees. We illustrate our results in a series of cases of increasing complexity, demonstrating that these models not only result in state-of-the-art performance, but provide a window into the resulting priors learned from data.",True,True,"Fang, Zhenghan and Buchanan, Sam and Sulam, Jeremias",,,,,,What's in a Prior? Learned Proximal Networks for Inverse Problems,What's in a Prior? Learned Proximal Networks for Inverse Problems,http://arxiv.org/pdf/2310.14344v2,"Proximal operators are ubiquitous in inverse problems, commonly appearing as part of algorithmic strategies to regularize problems that are otherwise ill-posed. Modern deep learning models have been brought to bear for these tasks too, as in the framework of plug-and-play or deep unrolling, where they loosely resemble proximal operators. Yet, something essential is lost in employing these purely data-driven approaches: there is no guarantee that a general deep network represents the proximal operator of any function, nor is there any characterization of the function for which the network might provide some approximate proximal. This not only makes guaranteeing convergence of iterative schemes challenging but, more fundamentally, complicates the analysis of what has been learned by these networks about their training data. 
Herein we provide a framework to develop learned proximal networks (LPN), prove that they provide exact proximal operators for a data-driven nonconvex regularizer, and show how a new training strategy, dubbed proximal matching, provably promotes the recovery of the log-prior of the true data distribution. Such LPN provide general, unsupervised, expressive proximal operators that can be used for general inverse problems with convergence guarantees. We illustrate our results in a series of cases of increasing complexity, demonstrating that these models not only result in state-of-the-art performance, but provide a window into the resulting priors learned from data." "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,hu2024stochastic,\cite{hu2024stochastic},Stochastic Deep Restoration Priors for Imaging Inverse Problems,http://arxiv.org/abs/2410.02057v1,"Deep neural networks trained as image denoisers are widely used as priors for solving imaging inverse problems. While Gaussian denoising is thought sufficient for learning image priors, we show that priors from deep models pre-trained as more general restoration operators can perform better. We introduce Stochastic deep Restoration Priors (ShaRP), a novel method that leverages an ensemble of such restoration models to regularize inverse problems. ShaRP improves upon methods using Gaussian denoiser priors by better handling structured artifacts and enabling self-supervised training even without fully sampled data. We prove ShaRP minimizes an objective function involving a regularizer derived from the score functions of minimum mean square error (MMSE) restoration operators, and theoretically analyze its convergence. Empirically, ShaRP achieves state-of-the-art performance on tasks such as magnetic resonance imaging reconstruction and single-image super-resolution, surpassing both denoiser-and diffusion-model-based methods without requiring retraining.",True,True,"Hu, Yuyang and Peng, Albert and Gan, Weijie and Milanfar, Peyman and Delbracio, Mauricio and Kamilov, Ulugbek S",2024.0,,,,arXiv preprint arXiv:2410.02057,Stochastic Deep Restoration Priors for Imaging Inverse Problems,Stochastic Deep Restoration Priors for Imaging Inverse Problems,http://arxiv.org/pdf/2410.02057v1,"Deep neural networks trained as image denoisers are widely used as priors for solving imaging inverse problems. While Gaussian denoising is thought sufficient for learning image priors, we show that priors from deep models pre-trained as more general restoration operators can perform better. We introduce Stochastic deep Restoration Priors (ShaRP), a novel method that leverages an ensemble of such restoration models to regularize inverse problems. ShaRP improves upon methods using Gaussian denoiser priors by better handling structured artifacts and enabling self-supervised training even without fully sampled data. We prove ShaRP minimizes an objective function involving a regularizer derived from the score functions of minimum mean square error (MMSE) restoration operators, and theoretically analyze its convergence. Empirically, ShaRP achieves state-of-the-art performance on tasks such as magnetic resonance imaging reconstruction and single-image super-resolution, surpassing both denoiser-and diffusion-model-based methods without requiring retraining." 
"Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,kulkarni2016reconnet,\cite{kulkarni2016reconnet},"ReconNet: Non-Iterative Reconstruction of Images from Compressively Sensed Random Measurements",http://arxiv.org/abs/1601.06892v2,"The goal of this paper is to present a non-iterative and more importantly an extremely fast algorithm to reconstruct images from compressively sensed (CS) random measurements. To this end, we propose a novel convolutional neural network (CNN) architecture which takes in CS measurements of an image as input and outputs an intermediate reconstruction. We call this network, ReconNet. The intermediate reconstruction is fed into an off-the-shelf denoiser to obtain the final reconstructed image. On a standard dataset of images we show significant improvements in reconstruction results (both in terms of PSNR and time complexity) over state-of-the-art iterative CS reconstruction algorithms at various measurement rates. Further, through qualitative experiments on real data collected using our block single pixel camera (SPC), we show that our network is highly robust to sensor noise and can recover visually better quality images than competitive algorithms at extremely low sensing rates of 0.1 and 0.04. To demonstrate that our algorithm can recover semantically informative images even at a low measurement rate of 0.01, we present a very robust proof of concept real-time visual tracking application.",True,True,"Kulkarni, Kuldeep and Lohit, Suhas and Turaga, Pavan and Kerviche, Ronan and Ashok, Amit",2016.0,,,,,"ReconNet: Non-Iterative Reconstruction of Images from Compressively Sensed Random Measurements",ReconNet: Non-Iterative Reconstruction of Images From ...,https://openaccess.thecvf.com/content_cvpr_2016/papers/Kulkarni_ReconNet_Non-Iterative_Reconstruction_CVPR_2016_paper.pdf,"by K Kulkarni · 2016 · Cited by 941 — ReconNet is a non-iterative, fast CNN algorithm that reconstructs images from compressively sensed measurements, using a novel CNN architecture." "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,shi2019image,\cite{shi2019image},Image compressed sensing using convolutional neural network,,,True,False,"Shi, Wuzhen and Jiang, Feng and Liu, Shaohui and Zhao, Debin",2019.0,,,,IEEE Transactions on Image Processing,Image compressed sensing using convolutional neural network,inofficialamanjha/Image-Compressed-Sensing-using- ...,https://github.com/inofficialamanjha/Image-Compressed-Sensing-using-convolutional-Neural-Network,"We have implemented an image CS framework using Convolutional Neural Network (CSNet), that includes a sampling network and a reconstruction network, which are" "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,shi2019scalable,\cite{shi2019scalable},Scalable convolutional neural network for image compressed sensing,,,True,False,"Shi, Wuzhen and Jiang, Feng and Liu, Shaohui and Zhao, Debin",2019.0,,,,,Scalable convolutional neural network for image compressed sensing,Scalable Convolutional Neural Network for Image ...,https://openaccess.thecvf.com/content_CVPR_2019/papers/Shi_Scalable_Convolutional_Neural_Network_for_Image_Compressed_Sensing_CVPR_2019_paper.pdf,"by W Shi · 2019 · Cited by 205 — compressed sensing. 
SCSNet is the first to implement scalable sampling and scalable reconstruction using CNN, which provides both coarse granular scalability" "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,yao2019dr2,\cite{yao2019dr2},Dr2-net: Deep residual reconstruction network for image compressive sensing,,,True,False,"Yao, Hantao and Dai, Feng and Zhang, Shiliang and Zhang, Yongdong and Tian, Qi and Xu, Changsheng",2019.0,,,,Neurocomputing,Dr2-net: Deep residual reconstruction network for image compressive sensing,DR2-Net: Deep Residual Reconstruction Network for Image Compressive Sensing,http://arxiv.org/pdf/1702.05743v4,"Most traditional algorithms for compressive sensing image reconstruction suffer from the intensive computation. Recently, deep learning-based reconstruction algorithms have been reported, which dramatically reduce the time complexity than iterative reconstruction algorithms. In this paper, we propose a novel \textbf{D}eep \textbf{R}esidual \textbf{R}econstruction Network (DR$^{2}$-Net) to reconstruct the image from its Compressively Sensed (CS) measurement. The DR$^{2}$-Net is proposed based on two observations: 1) linear mapping could reconstruct a high-quality preliminary image, and 2) residual learning could further improve the reconstruction quality. Accordingly, DR$^{2}$-Net consists of two components, \emph{i.e.,} linear mapping network and residual network, respectively. Specifically, the fully-connected layer in neural network implements the linear mapping network. We then expand the linear mapping network to DR$^{2}$-Net by adding several residual learning blocks to enhance the preliminary image. Extensive experiments demonstrate that the DR$^{2}$-Net outperforms traditional iterative methods and recent deep learning-based methods by large margins at measurement rates 0.01, 0.04, 0.1, and 0.25, respectively. The code of DR$^{2}$-Net has been released on: https://github.com/coldrainyht/caffe\_dr2" "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,metzler2017learned,\cite{metzler2017learned},Learned D-AMP: Principled Neural Network based Compressive Image Recovery,,,True,False,"Metzler, Chris and Mousavi, Ali and Baraniuk, Richard",2017.0,,,,,Learned D-AMP: Principled Neural Network based Compressive Image Recovery,Learned D-AMP: Principled Neural Network based Compressive Image Recovery,http://arxiv.org/pdf/1704.06625v4,"Compressive image recovery is a challenging problem that requires fast and accurate algorithms. Recently, neural networks have been applied to this problem with promising results. By exploiting massively parallel GPU processing architectures and oodles of training data, they can run orders of magnitude faster than existing techniques. However, these methods are largely unprincipled black boxes that are difficult to train and often-times specific to a single measurement matrix. It was recently demonstrated that iterative sparse-signal-recovery algorithms can be ""unrolled"" to form interpretable deep networks. Taking inspiration from this work, we develop a novel neural network architecture that mimics the behavior of the denoising-based approximate message passing (D-AMP) algorithm. We call this new network Learned D-AMP (LDAMP). The LDAMP network is easy to train, can be applied to a variety of different measurement matrices, and comes with a state-evolution heuristic that accurately predicts its performance. 
Most importantly, it outperforms the state-of-the-art BM3D-AMP and NLR-CS algorithms in terms of both accuracy and run time. At high resolutions, and when used with sensing matrices that have fast implementations, LDAMP runs over $50\times$ faster than BM3D-AMP and hundreds of times faster than NLR-CS." "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,zhang2018ista,\cite{zhang2018ista},ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing,,,True,False,"Zhang, Jian and Ghanem, Bernard",2018.0,,,,,ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing,ISTA-Net: Interpretable Optimization-Inspired Deep Network for ...,https://ieeexplore.ieee.org/iel7/8576498/8578098/08578294.pdf,"ISTA-Net is a structured deep network inspired by ISTA for image compressive sensing, combining traditional and network-based methods, with learned parameters." "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,yang2018admm,\cite{yang2018admm},ADMM-CSNet: A deep learning approach for image compressive sensing,,,True,False,"Yang, Yan and Sun, Jian and Li, Huibin and Xu, Zongben",2018.0,,,,IEEE Transactions on Pattern Analysis and Machine Intelligence,ADMM-CSNet: A deep learning approach for image compressive sensing,ADMM-CSNet: A Deep Learning Approach for Image Compressive ...,https://ieeexplore.ieee.org/document/8550778/,"In this paper, we propose two versions of a novel deep learning architecture, dubbed as ADMM-CSNet, by combining the traditional model-based CS method and data" "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,zhang2020optimization,\cite{zhang2020optimization},Optimization-inspired compact deep compressive sensing,,,True,False,"Zhang, Jian and Zhao, Chen and Gao, Wen",2020.0,,,,IEEE Journal of Selected Topics in Signal Processing,Optimization-inspired compact deep compressive sensing,Optimization-Inspired Compact Deep Compressive Sensing,https://ieeexplore.ieee.org/document/9019857/,"In this paper, we propose a novel framework to design an OPtimization-INspired Explicable deep Network, dubbed OPINE-Net, for adaptive sampling and recovery." 
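Note: the ISTA-Net, ADMM-CSNet, and OPINE-Net entries above all describe deep unfolding, i.e., truncating a classical iterative solver to a fixed number of stages and learning the per-stage parameters. For orientation, below is a minimal NumPy sketch of the plain ISTA iteration for min_x 0.5*||Ax - y||^2 + lam*||x||_1 that ISTA-Net-style networks unroll; the step size, threshold, and iteration count are generic choices, not values taken from any cited paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(y, A, lam=0.05, iters=200):
    """Plain ISTA for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1.

    A deep-unrolled variant keeps this gradient-then-shrinkage structure
    but trains a small network in place of the fixed threshold and
    sparsifying transform at each of K truncated iterations.
    """
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the fidelity gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                   # data-fidelity gradient step
        x = soft_threshold(x - grad / L, lam / L)  # proximal shrinkage step
    return x
```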
"Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,zhang2020amp,\cite{zhang2020amp},AMP-Net: Denoising-based deep unfolding for compressive image sensing,,,True,False,"Zhang, Zhonghao and Liu, Yipeng and Liu, Jiani and Wen, Fei and Zhu, Ce",2020.0,,,,IEEE Transactions on Image Processing,AMP-Net: Denoising-based deep unfolding for compressive image sensing,Denoising-Based Deep Unfolding for Compressive Image ...,https://ieeexplore.ieee.org/iel7/83/9263394/09298950.pdf,"by Z Zhang · 2020 · Cited by 297 — AMP-Net is a deep unfolding model for compressive image sensing, established by unfolding the denoising process of the approximate message" "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,shen2022transcs,\cite{shen2022transcs},TransCS: a transformer-based hybrid architecture for image compressed sensing,,,True,False,"Shen, Minghe and Gan, Hongping and Ning, Chao and Hua, Yi and Zhang, Tao",2022.0,,,,IEEE Transactions on Image Processing,TransCS: a transformer-based hybrid architecture for image compressed sensing,TransCS: A Transformer-Based Hybrid Architecture for ...,https://www.researchgate.net/publication/364935930_TransCS_A_Transformer-based_Hybrid_Architecture_for_Image_Compressed_Sensing,"In this paper, we propose a novel Transformer-based hybrid architecture (dubbed TransCS) to achieve high-quality image CS. In the sampling module, TransCS" "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,song2021memory,\cite{song2021memory},Memory-Augmented Deep Unfolding Network for Compressive Sensing,http://arxiv.org/abs/2110.09766v2,"Mapping a truncated optimization method into a deep neural network, deep unfolding network (DUN) has attracted growing attention in compressive sensing (CS) due to its good interpretability and high performance. Each stage in DUNs corresponds to one iteration in optimization. By understanding DUNs from the perspective of the human brain's memory processing, we find there exists two issues in existing DUNs. One is the information between every two adjacent stages, which can be regarded as short-term memory, is usually lost seriously. The other is no explicit mechanism to ensure that the previous stages affect the current stage, which means memory is easily forgotten. To solve these issues, in this paper, a novel DUN with persistent memory for CS is proposed, dubbed Memory-Augmented Deep Unfolding Network (MADUN). We design a memory-augmented proximal mapping module (MAPMM) by combining two types of memory augmentation mechanisms, namely High-throughput Short-term Memory (HSM) and Cross-stage Long-term Memory (CLM). HSM is exploited to allow DUNs to transmit multi-channel short-term memory, which greatly reduces information loss between adjacent stages. CLM is utilized to develop the dependency of deep information across cascading stages, which greatly enhances network representation capability. Extensive CS experiments on natural and MR images show that with the strong ability to maintain and balance information our MADUN outperforms existing state-of-the-art methods by a large margin. 
The source code is available at https://github.com/jianzhangcs/MADUN/.",True,True,"Song, Jiechong and Chen, Bin and Zhang, Jian",2021.0,,,,,Memory-Augmented Deep Unfolding Network for Compressive Sensing,Memory-Augmented Deep Unfolding Network for Compressive ...,https://dl.acm.org/doi/10.1145/3474085.3475562,Learning memory augmented cascading network for compressed sensing of images. In Proceedings of the European Conference on Computer Vision (ECCV) "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,you2021coast,\cite{you2021coast},COAST: COntrollable Arbitrary-Sampling NeTwork for Compressive Sensing,http://arxiv.org/abs/2107.07225v1,"Recent deep network-based compressive sensing (CS) methods have achieved great success. However, most of them regard different sampling matrices as different independent tasks and need to train a specific model for each target sampling matrix. Such practices give rise to inefficiency in computing and suffer from poor generalization ability. In this paper, we propose a novel COntrollable Arbitrary-Sampling neTwork, dubbed COAST, to solve CS problems of arbitrary-sampling matrices (including unseen sampling matrices) with one single model. Under the optimization-inspired deep unfolding framework, our COAST exhibits good interpretability. In COAST, a random projection augmentation (RPA) strategy is proposed to promote the training diversity in the sampling space to enable arbitrary sampling, and a controllable proximal mapping module (CPMM) and a plug-and-play deblocking (PnP-D) strategy are further developed to dynamically modulate the network features and effectively eliminate the blocking artifacts, respectively. Extensive experiments on widely used benchmark datasets demonstrate that our proposed COAST is not only able to handle arbitrary sampling matrices with one single model but also to achieve state-of-the-art performance with fast speed. The source code is available on https://github.com/jianzhangcs/COAST.",True,True,"You, Di and Zhang, Jian and Xie, Jingfen and Chen, Bin and Ma, Siwei",2021.0,,,,IEEE Transactions on Image Processing,COAST: COntrollable Arbitrary-Sampling NeTwork for Compressive Sensing,COntrollable Arbitrary-Sampling NeTwork for Compressive ...,https://ieeexplore.ieee.org/iel7/83/9263394/09467810.pdf,"by D You · 2021 · Cited by 150 — In this paper, we propose a novel COntrollable Arbitrary-Sampling neTwork, dubbed COAST, to solve CS problems of arbitrary-sampling matrices." "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,mou2022deep,\cite{mou2022deep},Deep Generalized Unfolding Networks for Image Restoration,http://arxiv.org/abs/2204.13348v1,"Deep neural networks (DNN) have achieved great success in image restoration. However, most DNN methods are designed as a black box, lacking transparency and interpretability. Although some methods are proposed to combine traditional optimization algorithms with DNN, they usually demand pre-defined degradation processes or handcrafted assumptions, making it difficult to deal with complex and real-world applications. In this paper, we propose a Deep Generalized Unfolding Network (DGUNet) for image restoration. Concretely, without loss of interpretability, we integrate a gradient estimation strategy into the gradient descent step of the Proximal Gradient Descent (PGD) algorithm, driving it to deal with complex and real-world image degradation. 
In addition, we design inter-stage information pathways across proximal mapping in different PGD iterations to rectify the intrinsic information loss in most deep unfolding networks (DUN) through a multi-scale and spatial-adaptive way. By integrating the flexible gradient descent and informative proximal mapping, we unfold the iterative PGD algorithm into a trainable DNN. Extensive experiments on various image restoration tasks demonstrate the superiority of our method in terms of state-of-the-art performance, interpretability, and generalizability. The source code is available at https://github.com/MC-E/Deep-Generalized-Unfolding-Networks-for-Image-Restoration.",True,True,"Mou, Chong and Wang, Qian and Zhang, Jian",2022.0,,,,,Deep Generalized Unfolding Networks for Image Restoration,Deep Generalized Unfolding Networks for Image Restoration,http://arxiv.org/pdf/2204.13348v1,"Deep neural networks (DNN) have achieved great success in image restoration. However, most DNN methods are designed as a black box, lacking transparency and interpretability. Although some methods are proposed to combine traditional optimization algorithms with DNN, they usually demand pre-defined degradation processes or handcrafted assumptions, making it difficult to deal with complex and real-world applications. In this paper, we propose a Deep Generalized Unfolding Network (DGUNet) for image restoration. Concretely, without loss of interpretability, we integrate a gradient estimation strategy into the gradient descent step of the Proximal Gradient Descent (PGD) algorithm, driving it to deal with complex and real-world image degradation. In addition, we design inter-stage information pathways across proximal mapping in different PGD iterations to rectify the intrinsic information loss in most deep unfolding networks (DUN) through a multi-scale and spatial-adaptive way. By integrating the flexible gradient descent and informative proximal mapping, we unfold the iterative PGD algorithm into a trainable DNN. Extensive experiments on various image restoration tasks demonstrate the superiority of our method in terms of state-of-the-art performance, interpretability, and generalizability. The source code is available at https://github.com/MC-E/Deep-Generalized-Unfolding-Networks-for-Image-Restoration." "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,ye2023csformer,\cite{ye2023csformer},CSformer: Bridging Convolution and Transformer for Compressive Sensing,http://arxiv.org/abs/2112.15299v1,"Convolution neural networks (CNNs) have succeeded in compressive image sensing. However, due to the inductive bias of locality and weight sharing, the convolution operations demonstrate the intrinsic limitations in modeling the long-range dependency. Transformer, designed initially as a sequence-to-sequence model, excels at capturing global contexts due to the self-attention-based architectures even though it may be equipped with limited localization abilities. This paper proposes CSformer, a hybrid framework that integrates the advantages of leveraging both detailed spatial information from CNN and the global context provided by transformer for enhanced representation learning. The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery. In the sampling module, images are measured block-by-block by the learned sampling matrix. In the reconstruction stage, the measurement is projected into dual stems. 
One is the CNN stem for modeling the neighborhood relationships by convolution, and the other is the transformer stem for adopting global self-attention mechanism. The dual branches structure is concurrent, and the local features and global representations are fused under different resolutions to maximize the complementary of features. Furthermore, we explore a progressive strategy and window-based transformer block to reduce the parameter and computational complexity. The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing, which achieves superior performance compared to state-of-the-art methods on different datasets.",True,True,"Ye, Dongjie and Ni, Zhangkai and Wang, Hanli and Zhang, Jian and Wang, Shiqi and Kwong, Sam",2023.0,,,,IEEE Transactions on Image Processing,CSformer: Bridging Convolution and Transformer for Compressive Sensing,CSformer: Bridging Convolution and Transformer for Compressive Sensing,http://arxiv.org/pdf/2112.15299v1,"Convolution neural networks (CNNs) have succeeded in compressive image sensing. However, due to the inductive bias of locality and weight sharing, the convolution operations demonstrate the intrinsic limitations in modeling the long-range dependency. Transformer, designed initially as a sequence-to-sequence model, excels at capturing global contexts due to the self-attention-based architectures even though it may be equipped with limited localization abilities. This paper proposes CSformer, a hybrid framework that integrates the advantages of leveraging both detailed spatial information from CNN and the global context provided by transformer for enhanced representation learning. The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery. In the sampling module, images are measured block-by-block by the learned sampling matrix. In the reconstruction stage, the measurement is projected into dual stems. One is the CNN stem for modeling the neighborhood relationships by convolution, and the other is the transformer stem for adopting global self-attention mechanism. The dual branches structure is concurrent, and the local features and global representations are fused under different resolutions to maximize the complementary of features. Furthermore, we explore a progressive strategy and window-based transformer block to reduce the parameter and computational complexity. The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing, which achieves superior performance compared to state-of-the-art methods on different datasets." "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,song2023optimization,\cite{song2023optimization},"Optimization-Inspired Cross-Attention Transformer for Compressive Sensing",http://arxiv.org/abs/2304.13986v1,"By integrating certain optimization solvers with deep neural networks, deep unfolding network (DUN) with good interpretability and high performance has attracted growing attention in compressive sensing (CS). However, existing DUNs often improve the visual quality at the price of a large number of parameters and have the problem of feature information loss during iteration. In this paper, we propose an Optimization-inspired Cross-attention Transformer (OCT) module as an iterative process, leading to a lightweight OCT-based Unfolding Framework (OCTUF) for image CS. 
Specifically, we design a novel Dual Cross Attention (Dual-CA) sub-module, which consists of an Inertia-Supplied Cross Attention (ISCA) block and a Projection-Guided Cross Attention (PGCA) block. ISCA block introduces multi-channel inertia forces and increases the memory effect by a cross attention mechanism between adjacent iterations. And, PGCA block achieves an enhanced information interaction, which introduces the inertia force into the gradient descent step through a cross attention block. Extensive CS experiments manifest that our OCTUF achieves superior performance compared to state-of-the-art methods while training lower complexity. Codes are available at https://github.com/songjiechong/OCTUF.",True,True,"Song, Jiechong and Mou, Chong and Wang, Shiqi and Ma, Siwei and Zhang, Jian",2023.0,,,,,"Optimization-Inspired Cross-Attention Transformer for Compressive Sensing",Optimization-Inspired Cross-Attention Transformer for ...,https://arxiv.org/abs/2304.13986,"by J Song · 2023 · Cited by 70 — In this paper, we propose an Optimization-inspired Cross-attention Transformer (OCT) module as an iterative process, leading to a lightweight OCT-based" "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,wang2023saunet,\cite{wang2023saunet},Saunet: Spatial-attention unfolding network for image compressive sensing,,,True,False,"Wang, Ping and Yuan, Xin",2023.0,,,,,Saunet: Spatial-attention unfolding network for image compressive sensing,"Spatial-Attention Unfolding Network for Image Compressive Sensing".,https://github.com/pwangcs/SAUNet,"SAUNet has achieved SOTA performance. More importantly, SAUNet contributes to real-world image compressive sensing systems, such as single-pixel cameras." "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,wang2024ufc,\cite{wang2024ufc},UFC-Net: Unrolling Fixed-point Continuous Network for Deep Compressive Sensing,,,True,False,"Wang, Xiaoyang and Gan, Hongping",2024.0,,,,,UFC-Net: Unrolling Fixed-point Continuous Network for Deep Compressive Sensing,[PDF] UFC-Net: Unrolling Fixed-point Continuous Network for Deep ...,https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_UFC-Net_Unrolling_Fixed-point_Continuous_Network_for_Deep_Compressive_Sensing_CVPR_2024_paper.pdf,"In this paper, we propose Unrolling Fixed-point Continuous Network (UFC-Net), a novel deep CS framework motivated by the traditional fixed-point continuous" "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,guo2024cpp,\cite{guo2024cpp},CPP-Net: Embracing Multi-Scale Feature Fusion into Deep Unfolding CP-PPA Network for Compressive Sensing,,,True,False,"Guo, Zhen and Gan, Hongping",2024.0,,,,,CPP-Net: Embracing Multi-Scale Feature Fusion into Deep Unfolding CP-PPA Network for Compressive Sensing,[PDF] Embracing Multi-Scale Feature Fusion into Deep Unfolding CP-PPA ...,https://openaccess.thecvf.com/content/CVPR2024/papers/Guo_CPP-Net_Embracing_Multi-Scale_Feature_Fusion_into_Deep_Unfolding_CP-PPA_Network_CVPR_2024_paper.pdf,"In this paper, we propose CPP-Net, a novel deep unfolding CS framework, inspired by the primal-dual hybrid strategy of the Chambolle and Pock Proximal Point" 
"Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,qu2024dual,\cite{qu2024dual},Dual-Scale Transformer for Large-Scale Single-Pixel Imaging,http://arxiv.org/abs/2404.05001v1,"Single-pixel imaging (SPI) is a potential computational imaging technique which produces image by solving an illposed reconstruction problem from few measurements captured by a single-pixel detector. Deep learning has achieved impressive success on SPI reconstruction. However, previous poor reconstruction performance and impractical imaging model limit its real-world applications. In this paper, we propose a deep unfolding network with hybrid-attention Transformer on Kronecker SPI model, dubbed HATNet, to improve the imaging quality of real SPI cameras. Specifically, we unfold the computation graph of the iterative shrinkage-thresholding algorithm (ISTA) into two alternative modules: efficient tensor gradient descent and hybrid-attention multiscale denoising. By virtue of Kronecker SPI, the gradient descent module can avoid high computational overheads rooted in previous gradient descent modules based on vectorized SPI. The denoising module is an encoder-decoder architecture powered by dual-scale spatial attention for high- and low-frequency aggregation and channel attention for global information recalibration. Moreover, we build a SPI prototype to verify the effectiveness of the proposed method. Extensive experiments on synthetic and real data demonstrate that our method achieves the state-of-the-art performance. The source code and pre-trained models are available at https://github.com/Gang-Qu/HATNet-SPI.",True,True,"Qu, Gang and Wang, Ping and Yuan, Xin",2024.0,,,,,Dual-Scale Transformer for Large-Scale Single-Pixel Imaging,[PDF] Dual-Scale Transformer for Large-Scale Single-Pixel Imaging,https://openaccess.thecvf.com/content/CVPR2024/papers/Qu_Dual-Scale_Transformer_for_Large-Scale_Single-Pixel_Imaging_CVPR_2024_paper.pdf,"In this paper, we propose a deep unfolding network with hybrid-attention Transformer on Kronecker SPI model, dubbed HATNet, to improve the imaging quality of" "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,yuan2016generalized,\cite{yuan2016generalized},"Generalized Alternating Projection Based Total Variation Minimization for Compressive Sensing",http://arxiv.org/abs/1511.03890v1,"We consider the total variation (TV) minimization problem used for compressive sensing and solve it using the generalized alternating projection (GAP) algorithm. Extensive results demonstrate the high performance of proposed algorithm on compressive sensing, including two dimensional images, hyperspectral images and videos. We further derive the Alternating Direction Method of Multipliers (ADMM) framework with TV minimization for video and hyperspectral image compressive sensing under the CACTI and CASSI framework, respectively. 
Connections between GAP and ADMM are also provided.",True,True,"Yuan, Xin",2016.0,,,,,"Generalized Alternating Projection Based Total Variation Minimization for Compressive Sensing",Generalized alternating projection based total variation minimization ...,https://ieeexplore.ieee.org/document/7532817/,We consider the total variation (TV) minimization problem used for compressive sensing and solve it using the generalized alternating projection (GAP) "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,geman1995nonlinear,\cite{geman1995nonlinear},Nonlinear image recovery with half-quadratic regularization,,,True,False,"Geman, Donald and Yang, Chengda",1995.0,,,,IEEE transactions on Image Processing,Nonlinear image recovery with half-quadratic regularization,Nonlinear image recovery with half-quadratic regularization,https://www.semanticscholar.org/paper/Nonlinear-image-recovery-with-half-quadratic-Geman-Yang/1c99baa92387ead70c668dde6a6ed73b20697a6f,This approach is based on an auxiliary array and an extended objective function in which the original variables appear quadratically and the auxiliary "Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging",2505.23180v1,romano2017little,\cite{romano2017little},The Little Engine that Could: Regularization by Denoising (RED),http://arxiv.org/abs/1611.02862v3,"Removal of noise from an image is an extensively studied problem in image processing. Indeed, the recent advent of sophisticated and highly effective denoising algorithms lead some to believe that existing methods are touching the ceiling in terms of noise removal performance. Can we leverage this impressive achievement to treat other tasks in image processing? Recent work has answered this question positively, in the form of the Plug-and-Play Prior ($P^3$) method, showing that any inverse problem can be handled by sequentially applying image denoising steps. This relies heavily on the ADMM optimization technique in order to obtain this chained denoising interpretation. Is this the only way in which tasks in image processing can exploit the image denoising engine? In this paper we provide an alternative, more powerful and more flexible framework for achieving the same goal. As opposed to the $P^3$ method, we offer Regularization by Denoising (RED): using the denoising engine in defining the regularization of the inverse problem. We propose an explicit image-adaptive Laplacian-based regularization functional, making the overall objective functional clearer and better defined. With a complete flexibility to choose the iterative optimization procedure for minimizing the above functional, RED is capable of incorporating any image denoising algorithm, treat general inverse problems very effectively, and is guaranteed to converge to the globally optimal result. We test this approach and demonstrate state-of-the-art results in the image deblurring and super-resolution problems.",True,True,"Romano, Yaniv and Elad, Michael and Milanfar, Peyman",2017.0,,,,SIAM Journal on Imaging Sciences,The Little Engine that Could: Regularization by Denoising (RED),The Little Engine that Could: Regularization by Denoising (RED),http://arxiv.org/pdf/1611.02862v3,"Removal of noise from an image is an extensively studied problem in image processing. 
Indeed, the recent advent of sophisticated and highly effective denoising algorithms lead some to believe that existing methods are touching the ceiling in terms of noise removal performance. Can we leverage this impressive achievement to treat other tasks in image processing? Recent work has answered this question positively, in the form of the Plug-and-Play Prior ($P^3$) method, showing that any inverse problem can be handled by sequentially applying image denoising steps. This relies heavily on the ADMM optimization technique in order to obtain this chained denoising interpretation. Is this the only way in which tasks in image processing can exploit the image denoising engine? In this paper we provide an alternative, more powerful and more flexible framework for achieving the same goal. As opposed to the $P^3$ method, we offer Regularization by Denoising (RED): using the denoising engine in defining the regularization of the inverse problem. We propose an explicit image-adaptive Laplacian-based regularization functional, making the overall objective functional clearer and better defined. With a complete flexibility to choose the iterative optimization procedure for minimizing the above functional, RED is capable of incorporating any image denoising algorithm, treat general inverse problems very effectively, and is guaranteed to converge to the globally optimal result. We test this approach and demonstrate state-of-the-art results in the image deblurring and super-resolution problems." "PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization",2505.22616v1,choi2007motion,\cite{choi2007motion},Motion-compensated frame interpolation using bilateral motion estimation and adaptive overlapped block motion compensation,,,True,False,"Choi, Byeong-Doo and Han, Jong-Woo and Kim, Chang-Su and Ko, Sung-Jea",2007.0,,,,IEEE Transactions on Circuits and Systems for Video Technology,Motion-compensated frame interpolation using bilateral motion estimation and adaptive overlapped block motion compensation,Motion-compensated frame interpolation using bilateral ...,https://pure.korea.ac.kr/en/publications/motion-compensated-frame-interpolation-using-bilateral-motion-est/fingerprints/,Dive into the research topics of 'Motion-compensated frame interpolation using bilateral motion estimation and adaptive overlapped block motion compensation'. "PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization",2505.22616v1,parihar2022comprehensive,\cite{parihar2022comprehensive},AceVFI: A Comprehensive Survey of Advances in Video Frame Interpolation,http://arxiv.org/abs/2506.01061v1,"Video Frame Interpolation (VFI) is a fundamental Low-Level Vision (LLV) task that synthesizes intermediate frames between existing ones while maintaining spatial and temporal coherence. VFI techniques have evolved from classical motion compensation-based approach to deep learning-based approach, including kernel-, flow-, hybrid-, phase-, GAN-, Transformer-, Mamba-, and more recently diffusion model-based approach. We introduce AceVFI, the most comprehensive survey on VFI to date, covering over 250+ papers across these approaches. We systematically organize and describe VFI methodologies, detailing the core principles, design assumptions, and technical characteristics of each approach. We categorize the learning paradigm of VFI methods namely, Center-Time Frame Interpolation (CTFI) and Arbitrary-Time Frame Interpolation (ATFI). 
We analyze key challenges of VFI such as large motion, occlusion, lighting variation, and non-linear motion. In addition, we review standard datasets, loss functions, evaluation metrics. We examine applications of VFI including event-based, cartoon, medical image VFI and joint VFI with other LLV tasks. We conclude by outlining promising future research directions to support continued progress in the field. This survey aims to serve as a unified reference for both newcomers and experts seeking a deep understanding of modern VFI landscapes.",True,True,"Parihar, Anil Singh and Varshney, Disha and Pandya, Kshitija and Aggarwal, Ashray",2022.0,,,,The Visual Computer,AceVFI: A Comprehensive Survey of Advances in Video Frame Interpolation,AceVFI: A Comprehensive Survey of Advances in Video Frame Interpolation,http://arxiv.org/pdf/2506.01061v1,"Video Frame Interpolation (VFI) is a fundamental Low-Level Vision (LLV) task that synthesizes intermediate frames between existing ones while maintaining spatial and temporal coherence. VFI techniques have evolved from classical motion compensation-based approach to deep learning-based approach, including kernel-, flow-, hybrid-, phase-, GAN-, Transformer-, Mamba-, and more recently diffusion model-based approach. We introduce AceVFI, the most comprehensive survey on VFI to date, covering over 250+ papers across these approaches. We systematically organize and describe VFI methodologies, detailing the core principles, design assumptions, and technical characteristics of each approach. We categorize the learning paradigm of VFI methods namely, Center-Time Frame Interpolation (CTFI) and Arbitrary-Time Frame Interpolation (ATFI). We analyze key challenges of VFI such as large motion, occlusion, lighting variation, and non-linear motion. In addition, we review standard datasets, loss functions, evaluation metrics. We examine applications of VFI including event-based, cartoon, medical image VFI and joint VFI with other LLV tasks. We conclude by outlining promising future research directions to support continued progress in the field. This survey aims to serve as a unified reference for both newcomers and experts seeking a deep understanding of modern VFI landscapes." "PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization",2505.22616v1,DAIN,\cite{DAIN},Depth-Aware Video Frame Interpolation,http://arxiv.org/abs/1904.00830v1,"Video frame interpolation aims to synthesize nonexistent frames in-between the original frames. While significant advances have been made from the recent deep convolutional neural networks, the quality of interpolation is often reduced due to large object motion or occlusion. In this work, we propose a video frame interpolation method which explicitly detects the occlusion by exploring the depth information. Specifically, we develop a depth-aware flow projection layer to synthesize intermediate flows that preferably sample closer objects than farther ones. In addition, we learn hierarchical features to gather contextual information from neighboring pixels. The proposed model then warps the input frames, depth maps, and contextual features based on the optical flow and local interpolation kernels for synthesizing the output frame. Our model is compact, efficient, and fully differentiable. 
Quantitative and qualitative results demonstrate that the proposed model performs favorably against state-of-the-art frame interpolation methods on a wide variety of datasets.",True,True,"Bao, Wenbo and Lai, Wei-Sheng and Ma, Chao and Zhang, Xiaoyun and Gao, Zhiyong and Yang, Ming-Hsuan",2019.0,,,,,Depth-Aware Video Frame Interpolation,[PDF] Depth-Aware Video Frame Interpolation - CVF Open Access,https://openaccess.thecvf.com/content_CVPR_2019/papers/Bao_Depth-Aware_Video_Frame_Interpolation_CVPR_2019_paper.pdf,Video frame interpolation aims to synthesize non- existent frames in-between the original frames. While sig- nificant advances have been made from the "PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization",2505.22616v1,RIFE,\cite{RIFE},Real-Time Intermediate Flow Estimation for Video Frame Interpolation,http://arxiv.org/abs/2011.06294v12,"Real-time video frame interpolation (VFI) is very useful in video processing, media players, and display devices. We propose RIFE, a Real-time Intermediate Flow Estimation algorithm for VFI. To realize a high-quality flow-based VFI method, RIFE uses a neural network named IFNet that can estimate the intermediate flows end-to-end with much faster speed. A privileged distillation scheme is designed for stable IFNet training and improve the overall performance. RIFE does not rely on pre-trained optical flow models and can support arbitrary-timestep frame interpolation with the temporal encoding input. Experiments demonstrate that RIFE achieves state-of-the-art performance on several public benchmarks. Compared with the popular SuperSlomo and DAIN methods, RIFE is 4--27 times faster and produces better results. Furthermore, RIFE can be extended to wider applications thanks to temporal encoding. The code is available at https://github.com/megvii-research/ECCV2022-RIFE.",True,True,"Huang, Zhewei and Zhang, Tianyuan and Heng, Wen and Shi, Boxin and Zhou, Shuchang",2022.0,,,,,Real-Time Intermediate Flow Estimation for Video Frame Interpolation,Real-Time Intermediate Flow Estimation for Video Frame ...,https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136740608.pdf,Video Frame Interpolation (VFI) aims to synthesize intermediate frames between two consecutive video frames. VFI supports various applications like slow-motion. "PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization",2505.22616v1,m2m,\cite{m2m},Many-to-many Splatting for Efficient Video Frame Interpolation,http://arxiv.org/abs/2204.03513v1,"Motion-based video frame interpolation commonly relies on optical flow to warp pixels from the inputs to the desired interpolation instant. Yet due to the inherent challenges of motion estimation (e.g. occlusions and discontinuities), most state-of-the-art interpolation approaches require subsequent refinement of the warped result to generate satisfying outputs, which drastically decreases the efficiency for multi-frame interpolation. In this work, we propose a fully differentiable Many-to-Many (M2M) splatting framework to interpolate frames efficiently. Specifically, given a frame pair, we estimate multiple bidirectional flows to directly forward warp the pixels to the desired time step, and then fuse any overlapping pixels. In doing so, each source pixel renders multiple target pixels and each target pixel can be synthesized from a larger area of visual context. This establishes a many-to-many splatting scheme with robustness to artifacts like holes. 
Moreover, for each input frame pair, M2M only performs motion estimation once and has a minuscule computational overhead when interpolating an arbitrary number of in-between frames, hence achieving fast multi-frame interpolation. We conducted extensive experiments to analyze M2M, and found that it significantly improves efficiency while maintaining high effectiveness.",True,True,"Hu, Ping and Niklaus, Simon and Sclaroff, Stan and Saenko, Kate",2022.0,,,,,Many-to-many Splatting for Efficient Video Frame Interpolation,Many-to-many Splatting for Efficient Video Frame Interpolation,https://ieeexplore.ieee.org/iel7/9878378/9878366/09878793.pdf,"In this work, we propose a fully differentiable Many-to-Many (M2M) splatting framework to interpolate frames efficiently. Specifically, given a frame pair, we" "PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization",2505.22616v1,EMA,\cite{EMA},"Extracting Motion and Appearance via Inter-Frame Attention for Efficient Video Frame Interpolation",http://arxiv.org/abs/2303.00440v2,"Effectively extracting inter-frame motion and appearance information is important for video frame interpolation (VFI). Previous works either extract both types of information in a mixed way or elaborate separate modules for each type of information, which lead to representation ambiguity and low efficiency. In this paper, we propose a novel module to explicitly extract motion and appearance information via a unifying operation. Specifically, we rethink the information process in inter-frame attention and reuse its attention map for both appearance feature enhancement and motion information extraction. Furthermore, for efficient VFI, our proposed module could be seamlessly integrated into a hybrid CNN and Transformer architecture. This hybrid pipeline can alleviate the computational complexity of inter-frame attention as well as preserve detailed low-level structure information. Experimental results demonstrate that, for both fixed- and arbitrary-timestep interpolation, our method achieves state-of-the-art performance on various datasets. Meanwhile, our approach enjoys a lighter computation overhead over models with close performance. The source code and models are available at https://github.com/MCG-NJU/EMA-VFI.",True,True,"Zhang, Guozhen and Zhu, Yuhan and Wang, Haonan and Chen, Youxin and Wu, Gangshan and Wang, Limin",2023.0,,,,,"Extracting Motion and Appearance via Inter-Frame Attention for Efficient Video Frame Interpolation",Extracting Motion and Appearance via Inter-Frame Attention ...,https://openaccess.thecvf.com/content/CVPR2023/papers/Zhang_Extracting_Motion_and_Appearance_via_Inter-Frame_Attention_for_Efficient_Video_CVPR_2023_paper.pdf,by G Zhang · 2023 · Cited by 157 — We propose to utilize inter-frame attention to extract both motion and appearance information simultaneously for video frame interpolation. • An hybrid CNN "PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization",2505.22616v1,unisim,\cite{unisim},UniSim: A Neural Closed-Loop Sensor Simulator,http://arxiv.org/abs/2308.01898v1,"Rigorously testing autonomy systems is essential for making safe self-driving vehicles (SDV) a reality. It requires one to generate safety critical scenarios beyond what can be collected safely in the world, as many scenarios happen rarely on public roads. To accurately evaluate performance, we need to test the SDV on these scenarios in closed-loop, where the SDV and other actors interact with each other at each timestep. 
Previously recorded driving logs provide a rich resource to build these new scenarios from, but for closed loop evaluation, we need to modify the sensor data based on the new scene configuration and the SDV's decisions, as actors might be added or removed and the trajectories of existing actors and the SDV will differ from the original log. In this paper, we present UniSim, a neural sensor simulator that takes a single recorded log captured by a sensor-equipped vehicle and converts it into a realistic closed-loop multi-sensor simulation. UniSim builds neural feature grids to reconstruct both the static background and dynamic actors in the scene, and composites them together to simulate LiDAR and camera data at new viewpoints, with actors added or removed and at new placements. To better handle extrapolated views, we incorporate learnable priors for dynamic objects, and leverage a convolutional network to complete unseen regions. Our experiments show UniSim can simulate realistic sensor data with small domain gap on downstream tasks. With UniSim, we demonstrate closed-loop evaluation of an autonomy system on safety-critical scenarios as if it were in the real world.",True,True,"Yang, Ze and Chen, Yun and Wang, Jingkang and Manivasagam, Sivabalan and Ma, Wei-Chiu and Yang, Anqi Joyce and Urtasun, Raquel",2023.0,,,,,UniSim: A Neural Closed-Loop Sensor Simulator,[2308.01898] UniSim: A Neural Closed-Loop Sensor Simulator - arXiv,https://arxiv.org/abs/2308.01898,A neural sensor simulator that takes a single recorded log captured by a sensor-equipped vehicle and converts it into a realistic closed-loop multi-sensor "PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization",2505.22616v1,neurad,\cite{neurad},NeuRAD: Neural Rendering for Autonomous Driving,http://arxiv.org/abs/2311.15260v3,"Neural radiance fields (NeRFs) have gained popularity in the autonomous driving (AD) community. Recent methods show NeRFs' potential for closed-loop simulation, enabling testing of AD systems, and as an advanced training data augmentation technique. However, existing methods often require long training times, dense semantic supervision, or lack generalizability. This, in turn, hinders the application of NeRFs for AD at scale. In this paper, we propose NeuRAD, a robust novel view synthesis method tailored to dynamic AD data. Our method features simple network design, extensive sensor modeling for both camera and lidar -- including rolling shutter, beam divergence and ray dropping -- and is applicable to multiple datasets out of the box. We verify its performance on five popular AD datasets, achieving state-of-the-art performance across the board. To encourage further development, we will openly release the NeuRAD source code. See https://github.com/georghess/NeuRAD .",True,True,"Tonderski, Adam and Lindstr{\""o}m, Carl and Hess, Georg and Ljungbergh, William and Svensson, Lennart and Petersson, Christoffer",2024.0,,,,,NeuRAD: Neural Rendering for Autonomous Driving,NeuRAD: Neural Rendering for Autonomous Driving,http://arxiv.org/pdf/2311.15260v3,"Neural radiance fields (NeRFs) have gained popularity in the autonomous driving (AD) community. Recent methods show NeRFs' potential for closed-loop simulation, enabling testing of AD systems, and as an advanced training data augmentation technique. However, existing methods often require long training times, dense semantic supervision, or lack generalizability. This, in turn, hinders the application of NeRFs for AD at scale. 
In this paper, we propose NeuRAD, a robust novel view synthesis method tailored to dynamic AD data. Our method features simple network design, extensive sensor modeling for both camera and lidar -- including rolling shutter, beam divergence and ray dropping -- and is applicable to multiple datasets out of the box. We verify its performance on five popular AD datasets, achieving state-of-the-art performance across the board. To encourage further development, we will openly release the NeuRAD source code. See https://github.com/georghess/NeuRAD ." "PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization",2505.22616v1,cao2024lightning,\cite{cao2024lightning},"Lightning NeRF: Efficient Hybrid Scene Representation for Autonomous Driving",http://arxiv.org/abs/2403.05907v1,"Recent studies have highlighted the promising application of NeRF in autonomous driving contexts. However, the complexity of outdoor environments, combined with the restricted viewpoints in driving scenarios, complicates the task of precisely reconstructing scene geometry. Such challenges often lead to diminished quality in reconstructions and extended durations for both training and rendering. To tackle these challenges, we present Lightning NeRF. It uses an efficient hybrid scene representation that effectively utilizes the geometry prior from LiDAR in autonomous driving scenarios. Lightning NeRF significantly improves the novel view synthesis performance of NeRF and reduces computational overheads. Through evaluations on real-world datasets, such as KITTI-360, Argoverse2, and our private dataset, we demonstrate that our approach not only exceeds the current state-of-the-art in novel view synthesis quality but also achieves a five-fold increase in training speed and a ten-fold improvement in rendering speed. Codes are available at https://github.com/VISION-SJTU/Lightning-NeRF .",True,True,"Cao, Junyi and Li, Zhichao and Wang, Naiyan and Ma, Chao",2024.0,,,,arXiv preprint arXiv:2403.05907,"Lightning NeRF: Efficient Hybrid Scene Representation for Autonomous Driving",Efficient Hybrid Scene Representation for Autonomous Driving - arXiv,https://arxiv.org/abs/2403.05907,We present Lightning NeRF. It uses an efficient hybrid scene representation that effectively utilizes the geometry prior from LiDAR in autonomous driving "PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization",2505.22616v1,jiang2023alignerf,\cite{jiang2023alignerf},"AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training",http://arxiv.org/abs/2211.09682v1,"Neural Radiance Fields (NeRFs) are a powerful representation for modeling a 3D scene as a continuous function. Though NeRF is able to render complex 3D scenes with view-dependent effects, few efforts have been devoted to exploring its limits in a high-resolution setting. Specifically, existing NeRF-based methods face several limitations when reconstructing high-resolution real scenes, including a very large number of parameters, misaligned input data, and overly smooth details. In this work, we conduct the first pilot study on training NeRF with high-resolution data and propose the corresponding solutions: 1) marrying the multilayer perceptron (MLP) with convolutional layers which can encode more neighborhood information while reducing the total number of parameters; 2) a novel training strategy to address misalignment caused by moving objects or small camera calibration errors; and 3) a high-frequency aware loss. 
Our approach is nearly free without introducing obvious training/testing costs, while experiments on different datasets demonstrate that it can recover more high-frequency details compared with the current state-of-the-art NeRF models. Project page: \url{https://yifanjiang.net/alignerf.}",True,True,"Jiang, Yifan and Hedman, Peter and Mildenhall, Ben and Xu, Dejia and Barron, Jonathan T and Wang, Zhangyang and Xue, Tianfan",2023.0,,,,,"AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training",[PDF] High-Fidelity Neural Radiance Fields via Alignment-Aware Training,https://openaccess.thecvf.com/content/CVPR2023/papers/Jiang_AligNeRF_High-Fidelity_Neural_Radiance_Fields_via_Alignment-Aware_Training_CVPR_2023_paper.pdf,"AligNeRF uses staged training: starting with an initial “normal” pre-training stage, followed by an alignment-aware fine-tuning stage. We choose mip-NeRF 360" "PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization",2505.22616v1,wynn2023diffusionerf,\cite{wynn2023diffusionerf},"DiffusioNeRF: Regularizing Neural Radiance Fields with Denoising Diffusion Models",http://arxiv.org/abs/2302.12231v3,"Under good conditions, Neural Radiance Fields (NeRFs) have shown impressive results on novel view synthesis tasks. NeRFs learn a scene's color and density fields by minimizing the photometric discrepancy between training views and differentiable renderings of the scene. Once trained from a sufficient set of views, NeRFs can generate novel views from arbitrary camera positions. However, the scene geometry and color fields are severely under-constrained, which can lead to artifacts, especially when trained with few input views. To alleviate this problem we learn a prior over scene geometry and color, using a denoising diffusion model (DDM). Our DDM is trained on RGBD patches of the synthetic Hypersim dataset and can be used to predict the gradient of the logarithm of a joint probability distribution of color and depth patches. We show that, these gradients of logarithms of RGBD patch priors serve to regularize geometry and color of a scene. During NeRF training, random RGBD patches are rendered and the estimated gradient of the log-likelihood is backpropagated to the color and density fields. Evaluations on LLFF, the most relevant dataset, show that our learned prior achieves improved quality in the reconstructed geometry and improved generalization to novel views. Evaluations on DTU show improved reconstruction quality among NeRF methods.",True,True,"Wynn, Jamie and Turmukhambetov, Daniyar",2023.0,,,,,"DiffusioNeRF: Regularizing Neural Radiance Fields with Denoising Diffusion Models",Regularizing Neural Radiance Fields with Denoising Diffusion Models,https://arxiv.org/abs/2302.12231,NeRFs learn a scene's color and density fields by minimizing the photometric discrepancy between training views and differentiable renderings of the scene. 
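Note: the radiance-field records above (NeuRAD, Lightning NeRF, AligNeRF, DiffusioNeRF) all describe training as minimizing the photometric discrepancy between captured pixels and differentiable renderings. The differentiable rendering at the core of that loop is the standard volume-rendering quadrature, sketched below in NumPy; the per-ray samples and the mean-squared photometric loss are generic assumptions, not any single paper's implementation.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """NeRF-style volume rendering quadrature along one ray.

    sigmas: (S,) densities at S samples; colors: (S, 3) sample colors;
    deltas: (S,) distances between consecutive samples.
    Returns the composited RGB for the ray.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)      # per-sample opacity
    trans = np.cumprod(1.0 - alphas + 1e-10)     # accumulated transparency
    trans = np.concatenate(([1.0], trans[:-1]))  # transmittance T_i, with T_1 = 1
    weights = trans * alphas                     # compositing weights
    return (weights[:, None] * colors).sum(axis=0)

def photometric_loss(rendered_rgb, observed_rgb):
    """Squared photometric discrepancy for one pixel, the usual per-ray loss term."""
    return float(np.mean((rendered_rgb - observed_rgb) ** 2))
```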
"PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization",2505.22616v1,3dgsEh,\cite{3dgsEh},3DGS-Enhancer: Enhancing Unbounded 3D Gaussian Splatting with View-consistent 2D Diffusion Priors,,,True,False,"Liu, Xi and Zhou, Chaoyi and Huang, Siyu",2024.0,,,,arXiv preprint arXiv:2410.16266,3DGS-Enhancer: Enhancing Unbounded 3D Gaussian Splatting with View-consistent 2D Diffusion Priors,Enhancing Unbounded 3D Gaussian Splatting with View- ...,https://arxiv.org/abs/2410.16266,"Image 4: arxiv logo>cs> arXiv:2410.16266 **arXiv:2410.16266** (cs) View a PDF of the paper titled 3DGS-Enhancer: Enhancing Unbounded 3D Gaussian Splatting with View-consistent 2D Diffusion Priors, by Xi Liu and 2 other authors View a PDF of the paper titled 3DGS-Enhancer: Enhancing Unbounded 3D Gaussian Splatting with View-consistent 2D Diffusion Priors, by Xi Liu and 2 other authors - [x] Bibliographic Explorer Toggle - [x] Connected Papers Toggle - [x] Litmaps Toggle - [x] scite.ai Toggle - [x] alphaXiv Toggle - [x] Links to Code Toggle - [x] DagsHub Toggle - [x] GotitPub Toggle - [x] Huggingface Toggle - [x] Links to Code Toggle - [x] ScienceCast Toggle - [x] Replicate Toggle - [x] Spaces Toggle - [x] Core recommender toggle " "PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization",2505.22616v1,yu2024viewcrafter,\cite{yu2024viewcrafter},"ViewCrafter: Taming Video Diffusion Models for High-fidelity Novel View Synthesis",http://arxiv.org/abs/2409.02048v1,"Despite recent advancements in neural 3D reconstruction, the dependence on dense multi-view captures restricts their broader applicability. In this work, we propose \textbf{ViewCrafter}, a novel method for synthesizing high-fidelity novel views of generic scenes from single or sparse images with the prior of video diffusion model. Our method takes advantage of the powerful generation capabilities of video diffusion model and the coarse 3D clues offered by point-based representation to generate high-quality video frames with precise camera pose control. To further enlarge the generation range of novel views, we tailored an iterative view synthesis strategy together with a camera trajectory planning algorithm to progressively extend the 3D clues and the areas covered by the novel views. With ViewCrafter, we can facilitate various applications, such as immersive experiences with real-time rendering by efficiently optimizing a 3D-GS representation using the reconstructed 3D points and the generated novel views, and scene-level text-to-3D generation for more imaginative content creation. Extensive experiments on diverse datasets demonstrate the strong generalization capability and superior performance of our method in synthesizing high-fidelity and consistent novel views.",True,True,"Yu, Wangbo and Xing, Jinbo and Yuan, Li and Hu, Wenbo and Li, Xiaoyu and Huang, Zhipeng and Gao, Xiangjun and Wong, Tien-Tsin and Shan, Ying and Tian, Yonghong",2024.0,,,,arXiv preprint arXiv:2409.02048,"ViewCrafter: Taming Video Diffusion Models for High-fidelity Novel View Synthesis",Taming Video Diffusion Models for High-fidelity Novel View ...,https://github.com/Drexubery/ViewCrafter,"ViewCrafter can generate high-fidelity novel views from a single or sparse reference image, while also supporting highly precise pose control." 
Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,long2015fully,\cite{long2015fully},Fully Convolutional Networks for Semantic Segmentation,http://arxiv.org/abs/1411.4038v2,"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build ""fully convolutional"" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",True,True,"Long, Jonathan and Shelhamer, Evan and Darrell, Trevor",2015.0,,,,,Fully Convolutional Networks for Semantic Segmentation,Fully Convolutional Networks for Semantic Segmentation,http://arxiv.org/pdf/1411.4038v2,"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build ""fully convolutional"" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image." Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,chen2017deeplab,\cite{chen2017deeplab},"DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs",http://arxiv.org/abs/1606.00915v2,"In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. 
It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed ""DeepLab"" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.",True,True,"Chen, Liang-Chieh and Papandreou, George and Kokkinos, Iasonas and Murphy, Kevin and Yuille, Alan L",2017.0,,,,IEEE transactions on pattern analysis and machine intelligence,"DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs",[PDF] DeepLab: Semantic Image Segmentation with Deep Convolutional ...,http://arxiv.org/pdf/1606.00915,"A deep convolutional neural network (VGG-16 [4] or ResNet-101 [11] in this work) trained in the task of image classification is re-purposed to the task of semantic segmentation by (1) transforming all the fully connected layers to convolutional layers (i.e., fully convolutional network [14]) and (2) increasing feature resolution through atrous convolutional layers, allowing us to compute feature responses every 8 pixels instead of every 32 pixels in the original network." Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,liu2015parsenet,\cite{liu2015parsenet},ParseNet: Looking Wider to See Better,http://arxiv.org/abs/1506.04579v2,"We present a technique for adding global context to deep convolutional networks for semantic segmentation. The approach is simple, using the average feature for a layer to augment the features at each location. In addition, we study several idiosyncrasies of training, significantly increasing the performance of baseline networks (e.g. from FCN). When we add our proposed global feature, and a technique for learning normalization parameters, accuracy increases consistently even over our improved versions of the baselines. Our proposed approach, ParseNet, achieves state-of-the-art performance on SiftFlow and PASCAL-Context with small additional computational cost over baselines, and near current state-of-the-art performance on PASCAL VOC 2012 semantic segmentation with a simple approach.
Code is available at https://github.com/weiliu89/caffe/tree/fcn .",True,True,"Liu, Wei and Rabinovich, Andrew and Berg, Alexander C",2015.0,,,,arXiv preprint arXiv:1506.04579,ParseNet: Looking Wider to See Better,ParseNet: Looking Wider to See Better,http://arxiv.org/pdf/1506.04579v2,"We present a technique for adding global context to deep convolutional networks for semantic segmentation. The approach is simple, using the average feature for a layer to augment the features at each location. In addition, we study several idiosyncrasies of training, significantly increasing the performance of baseline networks (e.g. from FCN). When we add our proposed global feature, and a technique for learning normalization parameters, accuracy increases consistently even over our improved versions of the baselines. Our proposed approach, ParseNet, achieves state-of-the-art performance on SiftFlow and PASCAL-Context with small additional computational cost over baselines, and near current state-of-the-art performance on PASCAL VOC 2012 semantic segmentation with a simple approach. Code is available at https://github.com/weiliu89/caffe/tree/fcn ." Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,zhao2017pyramid,\cite{zhao2017pyramid},Pyramid Scene Parsing Network,http://arxiv.org/abs/1612.01105v2,"Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction tasks. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.",True,True,"Zhao, Hengshuang and Shi, Jianping and Qi, Xiaojuan and Wang, Xiaogang and Jia, Jiaya",2017.0,,,,,Pyramid Scene Parsing Network,Pyramid Scene Parsing Network,http://arxiv.org/pdf/1612.01105v2,"Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction tasks. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes." 
Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,zhao2018psanet,\cite{zhao2018psanet},PSANet: Point-wise spatial attention network for scene parsing,,,True,False,"Zhao, Hengshuang and Zhang, Yi and Liu, Shu and Shi, Jianping and Loy, Chen Change and Lin, Dahua and Jia, Jiaya",2018.0,,,,,PSANet: Point-wise spatial attention network for scene parsing,[PDF] PSANet: Point-wise Spatial Attention Network for Scene Parsing,https://hszhao.github.io/paper/eccv18_psanet.pdf,"In this paper, we propose the point-wise spatial attention network (PSANet) to aggregate long-range contextual information in a flexible and adaptive manner." Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,zhu2019asymmetric,\cite{zhu2019asymmetric},Asymmetric Non-local Neural Networks for Semantic Segmentation,http://arxiv.org/abs/1908.07678v5,"The non-local module works as a particularly useful technique for semantic segmentation while criticized for its prohibitive computation and GPU memory occupation. In this paper, we present Asymmetric Non-local Neural Network to semantic segmentation, which has two prominent components: Asymmetric Pyramid Non-local Block (APNB) and Asymmetric Fusion Non-local Block (AFNB). APNB leverages a pyramid sampling module into the non-local block to largely reduce the computation and memory consumption without sacrificing the performance. AFNB is adapted from APNB to fuse the features of different levels under a sufficient consideration of long range dependencies and thus considerably improves the performance. Extensive experiments on semantic segmentation benchmarks demonstrate the effectiveness and efficiency of our work. In particular, we report the state-of-the-art performance of 81.3 mIoU on the Cityscapes test set. For a 256x128 input, APNB is around 6 times faster than a non-local block on GPU while 28 times smaller in GPU running memory occupation. Code is available at: https://github.com/MendelXu/ANN.git.",True,True,"Zhu, Zhen and Xu, Mengde and Bai, Song and Huang, Tengteng and Bai, Xiang",2019.0,,,,,Asymmetric Non-local Neural Networks for Semantic Segmentation,Asymmetric Non-Local Neural Networks for Semantic ...,https://openaccess.thecvf.com/content_ICCV_2019/papers/Zhu_Asymmetric_Non-Local_Neural_Networks_for_Semantic_Segmentation_ICCV_2019_paper.pdf,"In this paper, we present Asymmetric Non-local Neural Network to semantic segmentation, which has two prominent components: Asymmetric Pyramid Non-local Block (APNB) and Asymmetric Fusion Non-local Block (AFNB). Motivated by the spatial pyramid pooling [12, 16, 46] strategy, we propose to embed a pyramid sampling module into non-local blocks, which could largely reduce the computation overhead of matrix multiplications yet provide substantial semantic feature statistics. Different from these works, our network uniquely incorporates pyramid sampling strategies with non-local blocks to capture the semantic statistics of different scales with only a minor budget of computation, while maintaining the excellent performance as the original non-local modules." Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,xie2021segformer,\cite{xie2021segformer},"SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers",http://arxiv.org/abs/2105.15203v3,"We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perception (MLP) decoders.
SegFormer has two appealing features: 1) SegFormer comprises a novel hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding, thereby avoiding the interpolation of positional codes which leads to decreased performance when the testing resolution differs from training. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from different layers, and thus combining both local attention and global attention to render powerful representations. We show that this simple and lightweight design is the key to efficient segmentation on Transformers. We scale our approach up to obtain a series of models from SegFormer-B0 to SegFormer-B5, reaching significantly better performance and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters, being 5x smaller and 2.2% better than the previous best method. Our best model, SegFormer-B5, achieves 84.0% mIoU on Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes-C. Code will be released at: github.com/NVlabs/SegFormer.",True,True,"Xie, Enze and Wang, Wenhai and Yu, Zhiding and Anandkumar, Anima and Alvarez, Jose M and Luo, Ping",2021.0,,,,Advances in Neural Information Processing Systems,"SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers",[PDF] SegFormer: Simple and Efficient Design for Semantic Segmentation ...,https://proceedings.neurips.cc/paper/2021/file/64f1f27bf1b4ec22924fd0acb550c235-Paper.pdf,"We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perceptron." Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,zheng2021rethinking,\cite{zheng2021rethinking},"Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers",http://arxiv.org/abs/2012.15840v3,"Most recent semantic segmentation methods adopt a fully-convolutional network (FCN) with an encoder-decoder architecture. The encoder progressively reduces the spatial resolution and learns more abstract/semantic visual concepts with larger receptive fields. Since context modeling is critical for segmentation, the latest efforts have been focused on increasing the receptive field, through either dilated/atrous convolutions or inserting attention modules. However, the encoder-decoder based FCN architecture remains unchanged. In this paper, we aim to provide an alternative perspective by treating semantic segmentation as a sequence-to-sequence prediction task. Specifically, we deploy a pure transformer (ie, without convolution and resolution reduction) to encode an image as a sequence of patches. With the global context modeled in every layer of the transformer, this encoder can be combined with a simple decoder to provide a powerful segmentation model, termed SEgmentation TRansformer (SETR). Extensive experiments show that SETR achieves new state of the art on ADE20K (50.28% mIoU), Pascal Context (55.83% mIoU) and competitive results on Cityscapes. 
Particularly, we achieve the first position in the highly competitive ADE20K test server leaderboard on the day of submission.",True,True,"Zheng, Sixiao and Lu, Jiachen and Zhao, Hengshuang and Zhu, Xiatian and Luo, Zekun and Wang, Yabiao and Fu, Yanwei and Feng, Jianfeng and Xiang, Tao and Torr, Philip HS and others",2021.0,,,,,"Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers",[PDF] Rethinking Semantic Segmentation From a Sequence-to-Sequence ...,https://openaccess.thecvf.com/content/CVPR2021/papers/Zheng_Rethinking_Semantic_Segmentation_From_a_Sequence-to-Sequence_Perspective_With_Transformers_CVPR_2021_paper.pdf,"In this paper, we aim to provide an alternative perspective by treating semantic segmentation as a sequence-to-sequence prediction task. Specifically, we" Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,tsai2018learning,\cite{tsai2018learning},Learning to Adapt Structured Output Space for Semantic Segmentation,http://arxiv.org/abs/1802.10349v3,"Convolutional neural network-based approaches for semantic segmentation rely on supervision with pixel-level ground truth, but may not generalize well to unseen image domains. As the labeling process is tedious and labor intensive, developing algorithms that can adapt source ground truth labels to the target domain is of great interest. In this paper, we propose an adversarial learning method for domain adaptation in the context of semantic segmentation. Considering semantic segmentations as structured outputs that contain spatial similarities between the source and target domains, we adopt adversarial learning in the output space. To further enhance the adapted model, we construct a multi-level adversarial network to effectively perform output space domain adaptation at different feature levels. Extensive experiments and ablation study are conducted under various domain adaptation settings, including synthetic-to-real and cross-city scenarios. We show that the proposed method performs favorably against the state-of-the-art methods in terms of accuracy and visual quality.",True,True,"Tsai, Yi-Hsuan and Hung, Wei-Chih and Schulter, Samuel and Sohn, Kihyuk and Yang, Ming-Hsuan and Chandraker, Manmohan",2018.0,,,,,Learning to Adapt Structured Output Space for Semantic Segmentation,Learning to Adapt Structured Output Space for Semantic Segmentation,http://arxiv.org/pdf/1802.10349v3,"Convolutional neural network-based approaches for semantic segmentation rely on supervision with pixel-level ground truth, but may not generalize well to unseen image domains. As the labeling process is tedious and labor intensive, developing algorithms that can adapt source ground truth labels to the target domain is of great interest. In this paper, we propose an adversarial learning method for domain adaptation in the context of semantic segmentation. Considering semantic segmentations as structured outputs that contain spatial similarities between the source and target domains, we adopt adversarial learning in the output space. To further enhance the adapted model, we construct a multi-level adversarial network to effectively perform output space domain adaptation at different feature levels. Extensive experiments and ablation study are conducted under various domain adaptation settings, including synthetic-to-real and cross-city scenarios. We show that the proposed method performs favorably against the state-of-the-art methods in terms of accuracy and visual quality."
Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,hong2018conditional,\cite{hong2018conditional},Conditional generative adversarial network for structured domain adaptation,,,True,False,"Hong, Weixiang and Wang, Zhenzhen and Yang, Ming and Yuan, Junsong",2018.0,,,,,Conditional generative adversarial network for structured domain adaptation,Conditional Generative Adversarial Network for Structured Domain ...,https://weixianghong.github.io/publications/2018-10-04-CVPR/,"Conditional Generative Adversarial Network for Structured Domain Adaptation. Published in IEEE Conference on Computer Vision and Pattern Recognition (CVPR)," Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,kim2020learning,\cite{kim2020learning},"Learning Texture Invariant Representation for Domain Adaptation of Semantic Segmentation",http://arxiv.org/abs/2003.00867v2,"Since annotating pixel-level labels for semantic segmentation is laborious, leveraging synthetic data is an attractive solution. However, due to the domain gap between synthetic domain and real domain, it is challenging for a model trained with synthetic data to generalize to real data. In this paper, considering the fundamental difference between the two domains as the texture, we propose a method to adapt to the texture of the target domain. First, we diversity the texture of synthetic images using a style transfer algorithm. The various textures of generated images prevent a segmentation model from overfitting to one specific (synthetic) texture. Then, we fine-tune the model with self-training to get direct supervision of the target texture. Our results achieve state-of-the-art performance and we analyze the properties of the model trained on the stylized dataset with extensive experiments.",True,True,"Kim, Myeongjin and Byun, Hyeran",2020.0,,,,,"Learning Texture Invariant Representation for Domain Adaptation of Semantic Segmentation",Learning Texture Invariant Representation for Domain ...,https://openaccess.thecvf.com/content_CVPR_2020/papers/Kim_Learning_Texture_Invariant_Representation_for_Domain_Adaptation_of_Semantic_Segmentation_CVPR_2020_paper.pdf,"by M Kim · 2020 · Cited by 351 — We design a method to adapt to the target domain's texture for domain adaptation of semantic segmentation, combining pixel-level method and self-training." Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,pan2020unsupervised,\cite{pan2020unsupervised},"Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision",http://arxiv.org/abs/2004.07703v4,"Convolutional neural network-based approaches have achieved remarkable progress in semantic segmentation. However, these approaches heavily rely on annotated data which are labor intensive. To cope with this limitation, automatically annotated data generated from graphic engines are used to train segmentation models. However, the models trained from synthetic data are difficult to transfer to real images. To tackle this issue, previous works have considered directly adapting models from the source data to the unlabeled target data (to reduce the inter-domain gap). Nonetheless, these techniques do not consider the large distribution gap among the target data itself (intra-domain gap). In this work, we propose a two-step self-supervised domain adaptation approach to minimize the inter-domain and intra-domain gap together.
First, we conduct the inter-domain adaptation of the model; from this adaptation, we separate the target domain into an easy and hard split using an entropy-based ranking function. Finally, to decrease the intra-domain gap, we propose to employ a self-supervised adaptation technique from the easy to the hard split. Experimental results on numerous benchmark datasets highlight the effectiveness of our method against existing state-of-the-art approaches. The source code is available at https://github.com/feipan664/IntraDA.git.",True,True,"Pan, Fei and Shin, Inkyu and Rameau, Francois and Lee, Seokju and Kweon, In So",2020.0,,,,,"Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision",[PDF] Unsupervised Intra-Domain Adaptation for Semantic Segmentation ...,https://openaccess.thecvf.com/content_CVPR_2020/papers/Pan_Unsupervised_Intra-Domain_Adaptation_for_Semantic_Segmentation_Through_Self-Supervision_CVPR_2020_paper.pdf,"In this work, we propose a two-step self-supervised domain adaptation approach to minimize the inter-domain and intra-domain gap together. First, we conduct" Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,tsai2019domain,\cite{tsai2019domain},"Domain Adaptation for Structured Output via Discriminative Patch Representations",http://arxiv.org/abs/1901.05427v4,"Predicting structured outputs such as semantic segmentation relies on expensive per-pixel annotations to learn supervised models like convolutional neural networks. However, models trained on one data domain may not generalize well to other domains without annotations for model finetuning. To avoid the labor-intensive process of annotation, we develop a domain adaptation method to adapt the source data to the unlabeled target domain. We propose to learn discriminative feature representations of patches in the source domain by discovering multiple modes of patch-wise output distribution through the construction of a clustered space. With such representations as guidance, we use an adversarial learning scheme to push the feature representations of target patches in the clustered space closer to the distributions of source patches. In addition, we show that our framework is complementary to existing domain adaptation techniques and achieves consistent improvements on semantic segmentation. Extensive ablations and results are demonstrated on numerous benchmark datasets with various settings, such as synthetic-to-real and cross-city scenarios.",True,True,"Tsai, Yi-Hsuan and Sohn, Kihyuk and Schulter, Samuel and Chandraker, Manmohan",2019.0,,,,,"Domain Adaptation for Structured Output via Discriminative Patch Representations",Domain Adaptation for Structured Output via Discriminative ...,https://www.computer.org/csdl/proceedings-article/iccv/2019/480300b456/1hVlpOKL1FC,by YH Tsai · 2019 · Cited by 417 — We propose to learn discriminative feature representations of patches in the source domain by discovering multiple modes of patch-wise output distribution ... Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,chen2019synergistic,\cite{chen2019synergistic},"Synergistic Image and Feature Adaptation: Towards Cross-Modality Domain Adaptation for Medical Image Segmentation",http://arxiv.org/abs/1901.08211v4,"This paper presents a novel unsupervised domain adaptation framework, called Synergistic Image and Feature Adaptation (SIFA), to effectively tackle the problem of domain shift.
Domain adaptation has become an important and hot topic in recent studies on deep learning, aiming to recover performance degradation when applying the neural networks to new testing domains. Our proposed SIFA is an elegant learning diagram which presents synergistic fusion of adaptations from both image and feature perspectives. In particular, we simultaneously transform the appearance of images across domains and enhance domain-invariance of the extracted features towards the segmentation task. The feature encoder layers are shared by both perspectives to grasp their mutual benefits during the end-to-end learning procedure. Without using any annotation from the target domain, the learning of our unified model is guided by adversarial losses, with multiple discriminators employed from various aspects. We have extensively validated our method with a challenging application of cross-modality medical image segmentation of cardiac structures. Experimental results demonstrate that our SIFA model recovers the degraded performance from 17.2% to 73.0%, and outperforms the state-of-the-art methods by a significant margin.",True,True,"Chen, Cheng and Dou, Qi and Chen, Hao and Qin, Jing and Heng, Pheng-Ann",2019.0,,,,,"Synergistic Image and Feature Adaptation: Towards Cross-Modality Domain Adaptation for Medical Image Segmentation",Synergistic Image and Feature Adaptation: Towards Cross-Modality ...,https://aaai.org/papers/00865-synergistic-image-and-feature-adaptation-towards-cross-modality-domain-adaptation-for-medical-image-segmentation/,"This paper presents a novel unsupervised domain adaptation framework, called Synergistic Image and Feature Adaptation (SIFA), to effectively" Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,du2019ssf,\cite{du2019ssf},Ssf-dan: Separated semantic feature based domain adaptation network for semantic segmentation,,,True,False,"Du, Liang and Tan, Jingang and Yang, Hongye and Feng, Jianfeng and Xue, Xiangyang and Zheng, Qibao and Ye, Xiaoqing and Zhang, Xiaolin",2019.0,,,,,Ssf-dan: Separated semantic feature based domain adaptation network for semantic segmentation,ICCV 2019 Open Access Repository,https://openaccess.thecvf.com/content_ICCV_2019/html/Du_SSF-DAN_Separated_Semantic_Feature_Based_Domain_Adaptation_Network_for_Semantic_ICCV_2019_paper.html,"by L Du · 2019 · Cited by 213 — In this work, we propose a Separated Semantic Feature based domain adaptation network, named SSF-DAN, for semantic segmentation. First, a Semantic-wise" Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,melas2021pixmatch,\cite{melas2021pixmatch},"PixMatch: Unsupervised Domain Adaptation via Pixelwise Consistency Training",http://arxiv.org/abs/2105.08128v1,"Unsupervised domain adaptation is a promising technique for semantic segmentation and other computer vision tasks for which large-scale data annotation is costly and time-consuming. In semantic segmentation, it is attractive to train models on annotated images from a simulated (source) domain and deploy them on real (target) domains. In this work, we present a novel framework for unsupervised domain adaptation based on the notion of target-domain consistency training. Intuitively, our work is based on the idea that in order to perform well on the target domain, a model's output should be consistent with respect to small perturbations of inputs in the target domain. 
Specifically, we introduce a new loss term to enforce pixelwise consistency between the model's predictions on a target image and a perturbed version of the same image. In comparison to popular adversarial adaptation methods, our approach is simpler, easier to implement, and more memory-efficient during training. Experiments and extensive ablation studies demonstrate that our simple approach achieves remarkably strong results on two challenging synthetic-to-real benchmarks, GTA5-to-Cityscapes and SYNTHIA-to-Cityscapes. Code is available at: https://github.com/lukemelas/pixmatch",True,True,"Melas-Kyriazi, Luke and Manrai, Arjun K",2021.0,,,,,"PixMatch: Unsupervised Domain Adaptation via Pixelwise Consistency Training",Unsupervised Domain Adaptation via Pixelwise Consistency Training,https://arxiv.org/abs/2105.08128,"PixMatch is an unsupervised domain adaptation method using target-domain consistency training, enforcing pixelwise consistency between predictions and" Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,hoyer2022daformer,\cite{hoyer2022daformer},"DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation",http://arxiv.org/abs/2111.14887v2,"As acquiring pixel-wise annotations of real-world images for semantic segmentation is a costly process, a model can instead be trained with more accessible synthetic data and adapted to real images without requiring their annotations. This process is studied in unsupervised domain adaptation (UDA). Even though a large number of methods propose new adaptation strategies, they are mostly based on outdated network architectures. As the influence of recent network architectures has not been systematically studied, we first benchmark different network architectures for UDA and newly reveal the potential of Transformers for UDA semantic segmentation. Based on the findings, we propose a novel UDA method, DAFormer. The network architecture of DAFormer consists of a Transformer encoder and a multi-level context-aware feature fusion decoder. It is enabled by three simple but crucial training strategies to stabilize the training and to avoid overfitting to the source domain: While (1) Rare Class Sampling on the source domain improves the quality of the pseudo-labels by mitigating the confirmation bias of self-training toward common classes, (2) a Thing-Class ImageNet Feature Distance and (3) a learning rate warmup promote feature transfer from ImageNet pretraining. DAFormer represents a major advance in UDA. It improves the state of the art by 10.8 mIoU for GTA-to-Cityscapes and 5.4 mIoU for Synthia-to-Cityscapes and enables learning even difficult classes such as train, bus, and truck well. The implementation is available at https://github.com/lhoyer/DAFormer.",True,True,"Hoyer, Lukas and Dai, Dengxin and Van Gool, Luc",2022.0,,,,,"DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation",lhoyer/DAFormer: [CVPR22] Official Implementation of ...,https://github.com/lhoyer/DAFormer,"DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation. by Lukas Hoyer, Dengxin Dai, and Luc Van Gool." Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,hoyer2022hrda,\cite{hoyer2022hrda},"HRDA: Context-Aware High-Resolution Domain-Adaptive Semantic Segmentation",http://arxiv.org/abs/2204.13132v2,"Unsupervised domain adaptation (UDA) aims to adapt a model trained on the source domain (e.g. 
synthetic data) to the target domain (e.g. real-world data) without requiring further annotations on the target domain. This work focuses on UDA for semantic segmentation as real-world pixel-wise annotations are particularly expensive to acquire. As UDA methods for semantic segmentation are usually GPU memory intensive, most previous methods operate only on downscaled images. We question this design as low-resolution predictions often fail to preserve fine details. The alternative of training with random crops of high-resolution images alleviates this problem but falls short in capturing long-range, domain-robust context information. Therefore, we propose HRDA, a multi-resolution training approach for UDA, that combines the strengths of small high-resolution crops to preserve fine segmentation details and large low-resolution crops to capture long-range context dependencies with a learned scale attention, while maintaining a manageable GPU memory footprint. HRDA enables adapting small objects and preserving fine segmentation details. It significantly improves the state-of-the-art performance by 5.5 mIoU for GTA-to-Cityscapes and 4.9 mIoU for Synthia-to-Cityscapes, resulting in unprecedented 73.8 and 65.8 mIoU, respectively. The implementation is available at https://github.com/lhoyer/HRDA.",True,True,"Hoyer, Lukas and Dai, Dengxin and Van Gool, Luc",2022.0,,,,,"HRDA: Context-Aware High-Resolution Domain-Adaptive Semantic Segmentation",[PDF] HRDA: Context-Aware High-Resolution Domain-Adaptive Semantic ...,https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136900370.pdf,"HRDA is a multi-resolution training approach for UDA, using high-resolution crops for details and low-resolution for context, with a learned scale attention." Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,zou2018unsupervised,\cite{zou2018unsupervised},Unsupervised domain adaptation for semantic segmentation via class-balanced self-training,,,True,False,"Zou, Yang and Yu, Zhiding and Kumar, BVK and Wang, Jinsong",2018.0,,,,,Unsupervised domain adaptation for semantic segmentation via class-balanced self-training,Unsupervised Domain Adaptation for Semantic ...,https://openaccess.thecvf.com/content_ECCV_2018/papers/Yang_Zou_Unsupervised_Domain_Adaptation_ECCV_2018_paper.pdf,by Y Zou · 2018 · Cited by 1832 — A class-balanced self-training (CBST) is introduced to overcome the imbalance issue of transferring difficulty among classes via generating pseudo-labels with Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,chen2019domain,\cite{chen2019domain},Domain adaptation for semantic segmentation with maximum squares loss,,,True,False,"Chen, Minghao and Xue, Hongyang and Cai, Deng",2019.0,,,,,Domain adaptation for semantic segmentation with maximum squares loss,Domain Adaptation for Semantic Segmentation with Maximum Squares Loss,http://arxiv.org/pdf/1909.13589v1,"Deep neural networks for semantic segmentation always require a large number of samples with pixel-level labels, which becomes the major difficulty in their real-world applications. To reduce the labeling cost, unsupervised domain adaptation (UDA) approaches are proposed to transfer knowledge from labeled synthesized datasets to unlabeled real-world datasets. Recently, some semi-supervised learning methods have been applied to UDA and achieved state-of-the-art performance. One of the most popular approaches in semi-supervised learning is the entropy minimization method.
However, when applying the entropy minimization to UDA for semantic segmentation, the gradient of the entropy is biased towards samples that are easy to transfer. To balance the gradient of well-classified target samples, we propose the maximum squares loss. Our maximum squares loss prevents the training process being dominated by easy-to-transfer samples in the target domain. Besides, we introduce the image-wise weighting ratio to alleviate the class imbalance in the unlabeled target domain. Both synthetic-to-real and cross-city adaptation experiments demonstrate the effectiveness of our proposed approach. The code is released at https://github.com/ZJULearning/MaxSquareLoss." Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,zou2019confidence,\cite{zou2019confidence},Confidence Regularized Self-Training,http://arxiv.org/abs/1908.09822v3,"Recent advances in domain adaptation show that deep self-training presents a powerful means for unsupervised domain adaptation. These methods often involve an iterative process of predicting on target domain and then taking the confident predictions as pseudo-labels for retraining. However, since pseudo-labels can be noisy, self-training can put overconfident label belief on wrong classes, leading to deviated solutions with propagated errors. To address the problem, we propose a confidence regularized self-training (CRST) framework, formulated as regularized self-training. Our method treats pseudo-labels as continuous latent variables jointly optimized via alternating optimization. We propose two types of confidence regularization: label regularization (LR) and model regularization (MR). CRST-LR generates soft pseudo-labels while CRST-MR encourages the smoothness on network output. Extensive experiments on image classification and semantic segmentation show that CRSTs outperform their non-regularized counterpart with state-of-the-art performance. The code and models of this work are available at https://github.com/yzou2/CRST.",True,True,"Zou, Yang and Yu, Zhiding and Liu, Xiaofeng and Kumar, BVK and Wang, Jinsong",2019.0,,,,,Confidence Regularized Self-Training,[1908.09822] Confidence Regularized Self-Training - arXiv,https://arxiv.org/abs/1908.09822,"We propose a confidence regularized self-training (CRST) framework, formulated as regularized self-training. Our method treats pseudo-labels as continuous" Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,wang2021domain,\cite{wang2021domain},Domain adaptive semantic segmentation with self-supervised depth estimation,,,True,False,"Wang, Qin and Dai, Dengxin and Hoyer, Lukas and Van Gool, Luc and Fink, Olga",2021.0,,,,,Domain adaptive semantic segmentation with self-supervised depth estimation,[PDF] Domain Adaptive Semantic Segmentation With Self-Supervised ...,https://openaccess.thecvf.com/content/ICCV2021/papers/Wang_Domain_Adaptive_Semantic_Segmentation_With_Self-Supervised_Depth_Estimation_ICCV_2021_paper.pdf,"Domain Adaptive Semantic Segmentation with Self-Supervised Depth Estimation. Qin Wang, Dengxin Dai, Lukas Hoyer, Luc Van Gool, Olga Fink (ETH Zurich; MPI for Informatics; KU Leuven). Abstract: Domain adaptation for semantic segmentation aims to improve the model performance in the presence of a distribution shift between source and target domain.
We propose to use self-supervised depth estimation to improve semantic segmentation performance under the unsupervised domain adaptation setup. The additional self-supervised depth estimation can facilitate us to explicitly learn the correlation between tasks to improve the final semantic segmentation performance. By exploiting the supervision from self-supervised depth estimation and learning the correlation between semantics and depth, the proposed method achieves 55.0% mIoU (stereo depth) on this task." Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,lian2019constructing,\cite{lian2019constructing},"Constructing Self-motivated Pyramid Curriculums for Cross-Domain Semantic Segmentation: A Non-Adversarial Approach",http://arxiv.org/abs/1908.09547v1,"We propose a new approach, called self-motivated pyramid curriculum domain adaptation (PyCDA), to facilitate the adaptation of semantic segmentation neural networks from synthetic source domains to real target domains. Our approach draws on an insight connecting two existing works: curriculum domain adaptation and self-training. Inspired by the former, PyCDA constructs a pyramid curriculum which contains various properties about the target domain. Those properties are mainly about the desired label distributions over the target domain images, image regions, and pixels. By enforcing the segmentation neural network to observe those properties, we can improve the network's generalization capability to the target domain. Motivated by the self-training, we infer this pyramid of properties by resorting to the semantic segmentation network itself. Unlike prior work, we do not need to maintain any additional models (e.g., logistic regression or discriminator networks) or to solve minmax problems which are often difficult to optimize. We report state-of-the-art results for the adaptation from both GTAV and SYNTHIA to Cityscapes, two popular settings in unsupervised domain adaptation for semantic segmentation.",True,True,"Lian, Qing and Lv, Fengmao and Duan, Lixin and Gong, Boqing",2019.0,,,,,"Constructing Self-motivated Pyramid Curriculums for Cross-Domain Semantic Segmentation: A Non-Adversarial Approach",lianqing11/PyCDA - A Non-Adversarial Approach,https://github.com/lianqing11/PyCDA,PyCDA. Code for Constructing Self-motivated Pyramid Curriculums for Cross-Domain Semantic Segmentation: A Non-Adversarial Approach. Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,li2019bidirectional,\cite{li2019bidirectional},Bidirectional Learning for Domain Adaptation of Semantic Segmentation,http://arxiv.org/abs/1904.10620v1,"Domain adaptation for semantic image segmentation is very necessary since manually labeling large datasets with pixel-level labels is expensive and time consuming. Existing domain adaptation techniques either work on limited datasets, or yield not so good performance compared with supervised learning. In this paper, we propose a novel bidirectional learning framework for domain adaptation of segmentation. Using the bidirectional learning, the image translation model and the segmentation adaptation model can be learned alternatively and promote to each other. Furthermore, we propose a self-supervised learning algorithm to learn a better segmentation adaptation model and in return improve the image translation model. Experiments show that our method is superior to the state-of-the-art methods in domain adaptation of segmentation with a big margin.
The source code is available at https://github.com/liyunsheng13/BDL.",True,True,"Li, Yunsheng and Yuan, Lu and Vasconcelos, Nuno",2019.0,,,,,Bidirectional Learning for Domain Adaptation of Semantic Segmentation,Bidirectional Learning for Domain Adaptation of Semantic Segmentation,http://arxiv.org/pdf/1904.10620v1,"Domain adaptation for semantic image segmentation is very necessary since manually labeling large datasets with pixel-level labels is expensive and time consuming. Existing domain adaptation techniques either work on limited datasets, or yield not so good performance compared with supervised learning. In this paper, we propose a novel bidirectional learning framework for domain adaptation of segmentation. Using the bidirectional learning, the image translation model and the segmentation adaptation model can be learned alternatively and promote to each other. Furthermore, we propose a self-supervised learning algorithm to learn a better segmentation adaptation model and in return improve the image translation model. Experiments show that our method is superior to the state-of-the-art methods in domain adaptation of segmentation with a big margin. The source code is available at https://github.com/liyunsheng13/BDL." Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,wang2021uncertainty,\cite{wang2021uncertainty},Uncertainty-aware pseudo label refinery for domain adaptive semantic segmentation,,,True,False,"Wang, Yuxi and Peng, Junran and Zhang, ZhaoXiang",2021.0,,,,,Uncertainty-aware pseudo label refinery for domain adaptive semantic segmentation,[PDF] Uncertainty-Aware Pseudo Label Refinery for Domain Adaptive ...,https://openaccess.thecvf.com/content/ICCV2021/papers/Wang_Uncertainty-Aware_Pseudo_Label_Refinery_for_Domain_Adaptive_Semantic_Segmentation_ICCV_2021_paper.pdf,Domain Adaptation for Semantic Segmentation (DASS) aims to train a network that can assign pixel-level labels to unlabeled target data by learning from labeled Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,zhang2021prototypical,\cite{zhang2021prototypical},"Prototypical Pseudo Label Denoising and Target Structure Learning for Domain Adaptive Semantic Segmentation",http://arxiv.org/abs/2101.10979v2,"Self-training is a competitive approach in domain adaptive segmentation, which trains the network with the pseudo labels on the target domain. However inevitably, the pseudo labels are noisy and the target features are dispersed due to the discrepancy between source and target domains. In this paper, we rely on representative prototypes, the feature centroids of classes, to address the two issues for unsupervised domain adaptation. In particular, we take one step further and exploit the feature distances from prototypes that provide richer information than mere prototypes. Specifically, we use it to estimate the likelihood of pseudo labels to facilitate online correction in the course of training. Meanwhile, we align the prototypical assignments based on relative feature distances for two different views of the same target, producing a more compact target feature space. Moreover, we find that distilling the already learned knowledge to a self-supervised pretrained model further boosts the performance. Our method shows tremendous performance advantage over state-of-the-art methods. 
We will make the code publicly available.",True,True,"Zhang, Pan and Zhang, Bo and Zhang, Ting and Chen, Dong and Wang, Yong and Wen, Fang",2021.0,,,,,"Prototypical Pseudo Label Denoising and Target Structure Learning for Domain Adaptive Semantic Segmentation",Prototypical Pseudo Label Denoising and Target Structure ...,https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Prototypical_Pseudo_Label_Denoising_and_Target_Structure_Learning_for_Domain_CVPR_2021_paper.pdf,"by P Zhang · 2021 · Cited by 674 — This paper uses prototypes to address noisy pseudo labels in unsupervised domain adaptation, online correcting them and aligning soft assignments for a compact" Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,tranheden2021dacs,\cite{tranheden2021dacs},DACS: Domain Adaptation via Cross-domain Mixed Sampling,http://arxiv.org/abs/2007.08702v2,"Semantic segmentation models based on convolutional neural networks have recently displayed remarkable performance for a multitude of applications. However, these models typically do not generalize well when applied on new domains, especially when going from synthetic to real data. In this paper we address the problem of unsupervised domain adaptation (UDA), which attempts to train on labelled data from one domain (source domain), and simultaneously learn from unlabelled data in the domain of interest (target domain). Existing methods have seen success by training on pseudo-labels for these unlabelled images. Multiple techniques have been proposed to mitigate low-quality pseudo-labels arising from the domain shift, with varying degrees of success. We propose DACS: Domain Adaptation via Cross-domain mixed Sampling, which mixes images from the two domains along with the corresponding labels and pseudo-labels. These mixed samples are then trained on, in addition to the labelled data itself. We demonstrate the effectiveness of our solution by achieving state-of-the-art results for GTA5 to Cityscapes, a common synthetic-to-real semantic segmentation benchmark for UDA.",True,True,"Tranheden, Wilhelm and Olsson, Viktor and Pinto, Juliano and Svensson, Lennart",2021.0,,,,,DACS: Domain Adaptation via Cross-domain Mixed Sampling,DACS: Domain Adaptation via Cross-domain Mixed Sampling - arXiv,https://arxiv.org/abs/2007.08702,"We propose DACS: Domain Adaptation via Cross-domain mixed Sampling, which mixes images from the two domains along with the corresponding labels and pseudo-" Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,you2019universal,\cite{you2019universal},Universal Multi-Source Domain Adaptation,http://arxiv.org/abs/2011.02594v1,"Unsupervised domain adaptation enables intelligent models to transfer knowledge from a labeled source domain to a similar but unlabeled target domain. Recent study reveals that knowledge can be transferred from one source domain to another unknown target domain, called Universal Domain Adaptation (UDA). However, in the real-world application, there are often more than one source domain to be exploited for domain adaptation. In this paper, we formally propose a more general domain adaptation setting, universal multi-source domain adaptation (UMDA), where the label sets of multiple source domains can be different and the label set of target domain is completely unknown. The main challenges in UMDA are to identify the common label set between each source domain and target domain, and to keep the model scalable as the number of source domains increases. 
To address these challenges, we propose a universal multi-source adaptation network (UMAN) to solve the domain adaptation problem without increasing the complexity of the model in various UMDA settings. In UMAN, we estimate the reliability of each known class in the common label set via the prediction margin, which helps adversarial training to better align the distributions of multiple source domains and target domain in the common label set. Moreover, the theoretical guarantee for UMAN is also provided. Massive experimental results show that existing UDA and multi-source DA (MDA) methods cannot be directly applied to UMDA and the proposed UMAN achieves the state-of-the-art performance in various UMDA settings.",True,True,"You, Kaichao and Long, Mingsheng and Cao, Zhangjie and Wang, Jianmin and Jordan, Michael I",2019.0,,,,,Universal Multi-Source Domain Adaptation,[2011.02594] Universal Multi-Source Domain Adaptation - arXiv,https://arxiv.org/abs/2011.02594,"In this paper, we formally propose a more general domain adaptation setting, universal multi-source domain adaptation (UMDA), where the label sets of multiple" Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,fu2020learning,\cite{fu2020learning},Learning to detect open classes for universal domain adaptation,,,True,False,"Fu, Bo and Cao, Zhangjie and Long, Mingsheng and Wang, Jianmin",2020.0,,,,,Learning to detect open classes for universal domain adaptation,Learning to Detect Open Classes for Universal Domain ...,https://paperswithcode.com/paper/learning-to-detect-open-classes-for-universal,"Universal domain adaptation (UDA) transfers knowledge between domains without any constraint on the label sets, extending the applicability of domain" Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,bucci2020effectiveness,\cite{bucci2020effectiveness},On the Effectiveness of Image Rotation for Open Set Domain Adaptation,http://arxiv.org/abs/2007.12360v1,"Open Set Domain Adaptation (OSDA) bridges the domain gap between a labeled source domain and an unlabeled target domain, while also rejecting target classes that are not present in the source. To avoid negative transfer, OSDA can be tackled by first separating the known/unknown target samples and then aligning known target samples with the source data. We propose a novel method to addresses both these problems using the self-supervised task of rotation recognition. Moreover, we assess the performance with a new open set metric that properly balances the contribution of recognizing the known classes and rejecting the unknown samples. Comparative experiments with existing OSDA methods on the standard Office-31 and Office-Home benchmarks show that: (i) our method outperforms its competitors, (ii) reproducibility for this field is a crucial issue to tackle, (iii) our metric provides a reliable tool to allow fair open set evaluation.",True,True,"Bucci, Silvia and Loghmani, Mohammad Reza and Tommasi, Tatiana",2020.0,,,,,On the Effectiveness of Image Rotation for Open Set Domain Adaptation,On the Effectiveness of Image Rotation for Open Set Domain Adaptation,http://arxiv.org/pdf/2007.12360v1,"Open Set Domain Adaptation (OSDA) bridges the domain gap between a labeled source domain and an unlabeled target domain, while also rejecting target classes that are not present in the source. To avoid negative transfer, OSDA can be tackled by first separating the known/unknown target samples and then aligning known target samples with the source data. 
We propose a novel method to addresses both these problems using the self-supervised task of rotation recognition. Moreover, we assess the performance with a new open set metric that properly balances the contribution of recognizing the known classes and rejecting the unknown samples. Comparative experiments with existing OSDA methods on the standard Office-31 and Office-Home benchmarks show that: (i) our method outperforms its competitors, (ii) reproducibility for this field is a crucial issue to tackle, (iii) our metric provides a reliable tool to allow fair open set evaluation." Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,saito2020universal,\cite{saito2020universal},Universal Domain Adaptation through Self Supervision,http://arxiv.org/abs/2002.07953v3,"Unsupervised domain adaptation methods traditionally assume that all source categories are present in the target domain. In practice, little may be known about the category overlap between the two domains. While some methods address target settings with either partial or open-set categories, they assume that the particular setting is known a priori. We propose a more universally applicable domain adaptation framework that can handle arbitrary category shift, called Domain Adaptative Neighborhood Clustering via Entropy optimization (DANCE). DANCE combines two novel ideas: First, as we cannot fully rely on source categories to learn features discriminative for the target, we propose a novel neighborhood clustering technique to learn the structure of the target domain in a self-supervised way. Second, we use entropy-based feature alignment and rejection to align target features with the source, or reject them as unknown categories based on their entropy. We show through extensive experiments that DANCE outperforms baselines across open-set, open-partial and partial domain adaptation settings. Implementation is available at https://github.com/VisionLearningGroup/DANCE.",True,True,"Saito, Kuniaki and Kim, Donghyun and Sclaroff, Stan and Saenko, Kate",2020.0,,,,Advances in neural information processing systems,Universal Domain Adaptation through Self Supervision,Universal Domain Adaptation through Self Supervision,http://arxiv.org/pdf/2002.07953v3,"Unsupervised domain adaptation methods traditionally assume that all source categories are present in the target domain. In practice, little may be known about the category overlap between the two domains. While some methods address target settings with either partial or open-set categories, they assume that the particular setting is known a priori. We propose a more universally applicable domain adaptation framework that can handle arbitrary category shift, called Domain Adaptative Neighborhood Clustering via Entropy optimization (DANCE). DANCE combines two novel ideas: First, as we cannot fully rely on source categories to learn features discriminative for the target, we propose a novel neighborhood clustering technique to learn the structure of the target domain in a self-supervised way. Second, we use entropy-based feature alignment and rejection to align target features with the source, or reject them as unknown categories based on their entropy. We show through extensive experiments that DANCE outperforms baselines across open-set, open-partial and partial domain adaptation settings. Implementation is available at https://github.com/VisionLearningGroup/DANCE." 
Universal Domain Adaptation for Semantic Segmentation,2505.22458v1,saito2021ovanet,\cite{saito2021ovanet},OVANet: One-vs-All Network for Universal Domain Adaptation,http://arxiv.org/abs/2104.03344v4,"Universal Domain Adaptation (UNDA) aims to handle both domain-shift and category-shift between two datasets, where the main challenge is to transfer knowledge while rejecting unknown classes which are absent in the labeled source data but present in the unlabeled target data. Existing methods manually set a threshold to reject unknown samples based on validation or a pre-defined ratio of unknown samples, but this strategy is not practical. In this paper, we propose a method to learn the threshold using source samples and to adapt it to the target domain. Our idea is that a minimum inter-class distance in the source domain should be a good threshold to decide between known or unknown in the target. To learn the inter-and intra-class distance, we propose to train a one-vs-all classifier for each class using labeled source data. Then, we adapt the open-set classifier to the target domain by minimizing class entropy. The resulting framework is the simplest of all baselines of UNDA and is insensitive to the value of a hyper-parameter yet outperforms baselines with a large margin.",True,True,"Saito, Kuniaki and Saenko, Kate",2021.0,,,,,OVANet: One-vs-All Network for Universal Domain Adaptation,One-vs-All Network for Universal Domain Adaptation,https://arxiv.org/abs/2104.03344,"by K Saito · 2021 · Cited by 203 — We propose to train a one-vs-all classifier for each class using labeled source data. Then, we adapt the open-set classifier to the target domain by minimizing" RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network,2505.22427v1,sugimoto2004obstacle,\cite{sugimoto2004obstacle},Obstacle detection using millimeter-wave radar and its visualization on image sequence,,,True,False,"Sugimoto, Shigeki and Tateda, Hayato and Takahashi, Hidekazu and Okutomi, Masatoshi",2004.0,,,,,Obstacle detection using millimeter-wave radar and its visualization on image sequence,Obstacle detection using millimeter-wave radar and its visualization ...,https://ieeexplore.ieee.org/iel5/9258/29387/01334537.pdf,This section presents a calibration result between the sensors along with segmentation and vi- sualization results using real radar/image frame sequences. RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network,2505.22427v1,wang2011integrating,\cite{wang2011integrating},Integrating millimeter wave radar with a monocular vision sensor for on-road obstacle detection applications,,,True,False,"Wang, Tao and Zheng, Nanning and Xin, Jingmin and Ma, Zheng",2011.0,,,,Sensors,Integrating millimeter wave radar with a monocular vision sensor for on-road obstacle detection applications,Integrating millimeter wave radar with a monocular vision sensor for ...,https://pubmed.ncbi.nlm.nih.gov/22164117/,This paper presents a systematic scheme for fusing millimeter wave (MMW) radar and a monocular vision sensor for on-road obstacle detection. 
RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network,2505.22427v1,kim2014data,\cite{kim2014data},Data fusion of radar and image measurements for multi-object tracking via Kalman filtering,,,True,False,"Kim, Du Yong and Jeon, Moongu",2014.0,,,,Information Sciences,Data fusion of radar and image measurements for multi-object tracking via Kalman filtering,(PDF) Data fusion of radar and image measurements for multi-object ...,https://www.researchgate.net/publication/278072957_Data_fusion_of_radar_and_image_measurements_for_multi-object_tracking_via_Kalman_filtering,Data fusion of radar and image measurements for multi-object tracking via Kalman filtering. September 2014; Information Sciences 278:641-652. RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network,2505.22427v1,kim2018radar,\cite{kim2018radar},Radar and vision sensor fusion for object detection in autonomous vehicle surroundings,,,True,False,"Kim, Jihun and Han, Dong Seog and Senouci, Benaoumeur",2018.0,,,,,Radar and vision sensor fusion for object detection in autonomous vehicle surroundings,Radar and Vision Sensor Fusion for Object Detection ... - IEEE Xplore,https://ieeexplore.ieee.org/document/8436959,Multi-sensor data fusion for advanced driver assistance systems (ADAS) in the automotive industry has received much attention recently due to the emergence of self-driving vehicles and road traffic safety applications. RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network,2505.22427v1,kim2017comparative,\cite{kim2017comparative},Comparative analysis of RADAR-IR sensor fusion methods for object detection,,,True,False,"Kim, Taehwan and Kim, Sungho and Lee, Eunryung and Park, Miryong",2017.0,,,,,Comparative analysis of RADAR-IR sensor fusion methods for object detection,Comparative analysis of RADAR-IR sensor fusion methods for ...,https://ieeexplore.ieee.org/document/8204237/,This paper presents the Radar and IR sensor fusion method for objection detection.
The infrared camera parameter calibration with Levenberg-Marquardt (LM) RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network,2505.22427v1,el2015radar,\cite{el2015radar},Radar and vision sensors calibration for outdoor 3D reconstruction,,,True,False,"El Natour, Ghina and Aider, Omar Ait and Rouveure, Raphael and Berry, Fran{\c{c}}ois and Faure, Patrice",2015.0,,,,,Radar and vision sensors calibration for outdoor 3D reconstruction,Radar and vision sensors calibration for outdoor 3D reconstruction,https://ieeexplore.ieee.org/document/7139473/,"In this paper we introduce a new geometric calibration algorithm, and a geometric method of 3D reconstruction using a panoramic microwave radar and a camera" RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network,2505.22427v1,li2023automatic,\cite{li2023automatic},Automatic targetless LiDAR--camera calibration: a survey,,,True,False,"Li, Xingchen and Xiao, Yuxuan and Wang, Beibei and Ren, Haojie and Zhang, Yanyong and Ji, Jianmin",2023.0,,,,Artificial Intelligence Review,Automatic targetless LiDAR--camera calibration: a survey,Automatic targetless LiDAR–camera calibration: a survey,https://link.springer.com/article/10.1007/s10462-022-10317-y,This paper reviews the existing calibration algorithms for automatic targetless calibration between LiDARs and cameras. Unmanned intelligent RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network,2505.22427v1,pandey2012automatic,\cite{pandey2012automatic},Automatic targetless extrinsic calibration of a 3d lidar and camera by maximizing mutual information,,,True,False,"Pandey, Gaurav and McBride, James and Savarese, Silvio and Eustice, Ryan",2012.0,,,,,Automatic targetless extrinsic calibration of a 3d lidar and camera by maximizing mutual information,(PDF) Automatic Targetless Extrinsic Calibration of a 3D Lidar and ...,https://www.researchgate.net/publication/267843813_Automatic_Targetless_Extrinsic_Calibration_of_a_3D_Lidar_and_Camera_by_Maximizing_Mutual_Information,"This paper reports on an algorithm for automatic, targetless, extrinsic calibration of a lidar and optical camera system based upon the maximization of mutual" RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network,2505.22427v1,taylor2015motion,\cite{taylor2015motion},Motion-based calibration of multimodal sensor arrays,,,True,False,"Taylor, Zachary and Nieto, Juan",2015.0,,,,,Motion-based calibration of multimodal sensor arrays,(PDF) Motion-Based Calibration of Multimodal Sensor Arrays,https://www.researchgate.net/publication/273576814_Motion-Based_Calibration_of_Multimodal_Sensor_Arrays,This paper formulates a new pipeline for automated extrinsic calibration of multi-sensor mobile platforms. The new method can operate on any combination of RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network,2505.22427v1,levinson2013automatic,\cite{levinson2013automatic},Automatic online calibration of cameras and lasers.,,,True,False,"Levinson, Jesse and Thrun, Sebastian",2013.0,,,,,Automatic online calibration of cameras and lasers.,Automatic Online Calibration of Cameras and Lasers,https://www.roboticsproceedings.org/rss09/p29.pdf,"by J Levinson · Cited by 379 — In this paper, we introduce two new real-time techniques that enable camera-laser calibration online, automatically, and in arbitrary environments. 
The" RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network,2505.22427v1,yuan2021pixel,\cite{yuan2021pixel},"Pixel-level Extrinsic Self Calibration of High Resolution LiDAR and Camera in Targetless Environments",http://arxiv.org/abs/2103.01627v2,"In this letter, we present a novel method for automatic extrinsic calibration of high-resolution LiDARs and RGB cameras in targetless environments. Our approach does not require checkerboards but can achieve pixel-level accuracy by aligning natural edge features in the two sensors. On the theory level, we analyze the constraints imposed by edge features and the sensitivity of calibration accuracy with respect to edge distribution in the scene. On the implementation level, we carefully investigate the physical measuring principles of LiDARs and propose an efficient and accurate LiDAR edge extraction method based on point cloud voxel cutting and plane fitting. Due to the edges' richness in natural scenes, we have carried out experiments in many indoor and outdoor scenes. The results show that this method has high robustness, accuracy, and consistency. It can promote the research and application of the fusion between LiDAR and camera. We have open-sourced our code on GitHub to benefit the community.",True,True,"Yuan, Chongjian and Liu, Xiyuan and Hong, Xiaoping and Zhang, Fu",2021.0,,,,IEEE Robotics and Automation Letters,"Pixel-level Extrinsic Self Calibration of High Resolution LiDAR and Camera in Targetless Environments",Pixel-level Extrinsic Self Calibration of High Resolution LiDAR and ...,https://arxiv.org/abs/2103.01627,"In this letter, we present a novel method for automatic extrinsic calibration of high-resolution LiDARs and RGB cameras in targetless environments." RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network,2505.22427v1,schneider2017regnet,\cite{schneider2017regnet},RegNet: Multimodal Sensor Registration Using Deep Neural Networks,http://arxiv.org/abs/1707.03167v1,"In this paper, we present RegNet, the first deep convolutional neural network (CNN) to infer a 6 degrees of freedom (DOF) extrinsic calibration between multimodal sensors, exemplified using a scanning LiDAR and a monocular camera. Compared to existing approaches, RegNet casts all three conventional calibration steps (feature extraction, feature matching and global regression) into a single real-time capable CNN. Our method does not require any human interaction and bridges the gap between classical offline and target-less online calibration approaches as it provides both a stable initial estimation as well as a continuous online correction of the extrinsic parameters. During training we randomly decalibrate our system in order to train RegNet to infer the correspondence between projected depth measurements and RGB image and finally regress the extrinsic calibration. 
Additionally, with an iterative execution of multiple CNNs, that are trained on different magnitudes of decalibration, our approach compares favorably to state-of-the-art methods in terms of a mean calibration error of 0.28 degrees for the rotational and 6 cm for the translation components even for large decalibrations up to 1.5 m and 20 degrees.",True,True,"Schneider, Nick and Piewak, Florian and Stiller, Christoph and Franke, Uwe",2017.0,,,,,RegNet: Multimodal Sensor Registration Using Deep Neural Networks,RegNet: Multimodal Sensor Registration Using Deep Neural Networks,http://arxiv.org/pdf/1707.03167v1,"In this paper, we present RegNet, the first deep convolutional neural network (CNN) to infer a 6 degrees of freedom (DOF) extrinsic calibration between multimodal sensors, exemplified using a scanning LiDAR and a monocular camera. Compared to existing approaches, RegNet casts all three conventional calibration steps (feature extraction, feature matching and global regression) into a single real-time capable CNN. Our method does not require any human interaction and bridges the gap between classical offline and target-less online calibration approaches as it provides both a stable initial estimation as well as a continuous online correction of the extrinsic parameters. During training we randomly decalibrate our system in order to train RegNet to infer the correspondence between projected depth measurements and RGB image and finally regress the extrinsic calibration. Additionally, with an iterative execution of multiple CNNs, that are trained on different magnitudes of decalibration, our approach compares favorably to state-of-the-art methods in terms of a mean calibration error of 0.28 degrees for the rotational and 6 cm for the translation components even for large decalibrations up to 1.5 m and 20 degrees." RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network,2505.22427v1,iyer2018calibnet,\cite{iyer2018calibnet},"CalibNet: Geometrically Supervised Extrinsic Calibration using 3D Spatial Transformer Networks",http://arxiv.org/abs/1803.08181v2,"3D LiDARs and 2D cameras are increasingly being used alongside each other in sensor rigs for perception tasks. Before these sensors can be used to gather meaningful data, however, their extrinsics (and intrinsics) need to be accurately calibrated, as the performance of the sensor rig is extremely sensitive to these calibration parameters. A vast majority of existing calibration techniques require significant amounts of data and/or calibration targets and human effort, severely impacting their applicability in large-scale production systems. We address this gap with CalibNet: a self-supervised deep network capable of automatically estimating the 6-DoF rigid body transformation between a 3D LiDAR and a 2D camera in real-time. CalibNet alleviates the need for calibration targets, thereby resulting in significant savings in calibration efforts. During training, the network only takes as input a LiDAR point cloud, the corresponding monocular image, and the camera calibration matrix K. At train time, we do not impose direct supervision (i.e., we do not directly regress to the calibration parameters, for example). Instead, we train the network to predict calibration parameters that maximize the geometric and photometric consistency of the input images and point clouds. 
CalibNet learns to iteratively solve the underlying geometric problem and accurately predicts extrinsic calibration parameters for a wide range of mis-calibrations, without requiring retraining or domain adaptation. The project page is hosted at https://epiception.github.io/CalibNet",True,True,"Iyer, Ganesh and Ram, R Karnik and Murthy, J Krishna and Krishna, K Madhava",2018.0,,,,,"CalibNet: Geometrically Supervised Extrinsic Calibration using 3D Spatial Transformer Networks",CalibNet: Geometrically Supervised Extrinsic Calibration ...,https://dl.acm.org/doi/10.1109/IROS.2018.8593693,by G Iyer · 2018 · Cited by 247 — CalibNet: Geometrically Supervised Extrinsic Calibration using 3D Spatial Transformer Networks. Authors: Ganesh Iyer. Robotics Research Center RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network,2505.22427v1,shi2020calibrcnn,\cite{shi2020calibrcnn},CalibRCNN: Calibrating camera and lidar by recurrent convolutional neural network and geometric constraints,,,True,False,"Shi, Jieying and Zhu, Ziheng and Zhang, Jianhua and Liu, Ruyu and Wang, Zhenhua and Chen, Shengyong and Liu, Honghai",2020.0,,,,,CalibRCNN: Calibrating camera and lidar by recurrent convolutional neural network and geometric constraints,Calibrating Camera and LiDAR by recurrent convolutional neural ...,https://researchportal.port.ac.uk/en/publications/calibrcnn(a901bae3-8f6e-49d3-89e2-1c503f95db11).html, RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network,2505.22427v1,sak2014long,\cite{sak2014long},Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition,,,True,False,"Sak, Ha{\c{s}}im and Senior, Andrew and Beaufays, Fran{\c{c}}oise",2014.0,,,,arXiv preprint arXiv:1402.1128,Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition,long short-term memory based recurrent neural network ... - ar5iv,https://ar5iv.labs.arxiv.org/html/1402.1128,"In this paper, we show that LSTM based RNN architectures can obtain state of the art performance in a large vocabulary speech recognition system with thousands" RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network,2505.22427v1,lv2021lccnet,\cite{lv2021lccnet},LCCNet: LiDAR and Camera Self-Calibration using Cost Volume Network,http://arxiv.org/abs/2012.13901v2,"In this paper, we propose a novel online self-calibration approach for Light Detection and Ranging (LiDAR) and camera sensors. Compared to the previous CNN-based methods that concatenate the feature maps of the RGB image and decalibrated depth image, we exploit the cost volume inspired by the PWC-Net for feature matching. Besides the smooth L1-Loss of the predicted extrinsic calibration parameters, an additional point cloud loss is applied. Instead of regress the extrinsic parameters between LiDAR and camera directly, we predict the decalibrated deviation from initial calibration to the ground truth. During inference, the calibration error decreases further with the usage of iterative refinement and the temporal filtering approach.
The evaluation results on the KITTI dataset illustrate that our approach outperforms CNN-based state-of-the-art methods in terms of a mean absolute calibration error of 0.297cm in translation and 0.017{\deg} in rotation with miscalibration magnitudes of up to 1.5m and 20{\deg}.",True,True,"Lv, Xudong and Wang, Boya and Dou, Ziwen and Ye, Dong and Wang, Shuo",2021.0,,,,,LCCNet: LiDAR and Camera Self-Calibration using Cost Volume Network,LCCNet: LiDAR and Camera Self-Calibration using Cost ...,https://arxiv.org/abs/2012.13901,"by X Lv · 2020 · Cited by 175 — Abstract:In this paper, we propose a novel online self-calibration approach for Light Detection and Ranging (LiDAR) and camera sensors.See more" RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network,2505.22427v1,pervsic2021online,\cite{pervsic2021online},Online multi-sensor calibration based on moving object tracking,,,True,False,"Per{\v{s}}i{\'c}, Juraj and Petrovi{\'c}, Luka and Markovi{\'c}, Ivan and Petrovi{\'c}, Ivan",2021.0,,,,Advanced Robotics,Online multi-sensor calibration based on moving object tracking,Online multi-sensor calibration based on moving object tracking,https://www.researchgate.net/publication/345092954_Online_multi-sensor_calibration_based_on_moving_object_tracking,Peršić et al. [5] propose an online targetless multi-sensor calibration method based on the detection and tracking of moving objects. It employs the tracking- RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network,2505.22427v1,scholler2019targetless,\cite{scholler2019targetless},"Targetless Rotational Auto-Calibration of Radar and Camera for Intelligent Transportation Systems",http://arxiv.org/abs/1904.08743v2,"Most intelligent transportation systems use a combination of radar sensors and cameras for robust vehicle perception. The calibration of these heterogeneous sensor types in an automatic fashion during system operation is challenging due to differing physical measurement principles and the high sparsity of traffic radars. We propose - to the best of our knowledge - the first data-driven method for automatic rotational radar-camera calibration without dedicated calibration targets. Our approach is based on a coarse and a fine convolutional neural network. We employ a boosting-inspired training algorithm, where we train the fine network on the residual error of the coarse network. Due to the unavailability of public datasets combining radar and camera measurements, we recorded our own real-world data. We demonstrate that our method is able to reach precise and robust sensor registration and show its generalization capabilities to different sensor alignments and perspectives.",True,True,"Sch{\""o}ller, Christoph and Schnettler, Maximilian and Kr{\""a}mmer, Annkathrin and Hinz, Gereon and Bakovic, Maida and G{\""u}zet, M{\""u}ge and Knoll, Alois",2019.0,,,,,"Targetless Rotational Auto-Calibration of Radar and Camera for Intelligent Transportation Systems",Targetless Rotational Auto-Calibration of Radar and Camera ... 
- arXiv,https://arxiv.org/abs/1904.08743,"Authors: Christoph Schöller, Maximilian Schnettler, Annkathrin Krämmer, Gereon Hinz, Maida Bakovic, Müge Güzet, Alois Knoll. Comments: Accepted at the IEEE Intelligent Transportation Systems Conference (ITSC) 2019. Subjects: Computer Vision and Pattern Recognition (cs.CV). Cite as: arXiv:1904.08743 [cs.CV], https://doi.org/10.48550/arXiv.1904.08743" RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network,2505.22427v1,wise2021continuous,\cite{wise2021continuous},A Continuous-Time Approach for 3D Radar-to-Camera Extrinsic Calibration,http://arxiv.org/abs/2103.07505v2,"Reliable operation in inclement weather is essential to the deployment of safe autonomous vehicles (AVs). Robustness and reliability can be achieved by fusing data from the standard AV sensor suite (i.e., lidars, cameras) with weather robust sensors, such as millimetre-wavelength radar. Critically, accurate sensor data fusion requires knowledge of the rigid-body transform between sensor pairs, which can be determined through the process of extrinsic calibration. A number of extrinsic calibration algorithms have been designed for 2D (planar) radar sensors - however, recently-developed, low-cost 3D millimetre-wavelength radars are set to displace their 2D counterparts in many applications. In this paper, we present a continuous-time 3D radar-to-camera extrinsic calibration algorithm that utilizes radar velocity measurements and, unlike the majority of existing techniques, does not require specialized radar retroreflectors to be present in the environment. We derive the observability properties of our formulation and demonstrate the efficacy of our algorithm through synthetic and real-world experiments.",True,True,"Wise, Emmett and Per{\v{s}}i{\'c}, Juraj and Grebe, Christopher and Petrovi{\'c}, Ivan and Kelly, Jonathan",2021.0,,,,,A Continuous-Time Approach for 3D Radar-to-Camera Extrinsic Calibration,A Continuous-Time Approach for 3D Radar-to-Camera ...,https://dl.acm.org/doi/10.1109/ICRA48506.2021.9561938,"by E Wise · 2021 · Cited by 42 — In this paper, we present a continuous-time 3D radar-to-camera extrinsic calibration algorithm that utilizes radar velocity measurements and, unlike the" RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network,2505.22427v1,wise2023spatiotemporal,\cite{wise2023spatiotemporal},"Spatiotemporal Calibration of 3D Millimetre-Wavelength Radar-Camera Pairs",http://arxiv.org/abs/2211.01871v4,"Autonomous vehicles (AVs) fuse data from multiple sensors and sensing modalities to impart a measure of robustness when operating in adverse conditions. Radars and cameras are popular choices for use in sensor fusion; although radar measurements are sparse in comparison to camera images, radar scans penetrate fog, rain, and snow.
However, accurate sensor fusion depends upon knowledge of the spatial transform between the sensors and any temporal misalignment that exists in their measurement times. During the life cycle of an AV, these calibration parameters may change, so the ability to perform in-situ spatiotemporal calibration is essential to ensure reliable long-term operation. State-of-the-art 3D radar-camera spatiotemporal calibration algorithms require bespoke calibration targets that are not readily available in the field. In this paper, we describe an algorithm for targetless spatiotemporal calibration that does not require specialized infrastructure. Our approach leverages the ability of the radar unit to measure its own ego-velocity relative to a fixed, external reference frame. We analyze the identifiability of the spatiotemporal calibration problem and determine the motions necessary for calibration. Through a series of simulation studies, we characterize the sensitivity of our algorithm to measurement noise. Finally, we demonstrate accurate calibration for three real-world systems, including a handheld sensor rig and a vehicle-mounted sensor array. Our results show that we are able to match the performance of an existing, target-based method, while calibrating in arbitrary, infrastructure-free environments.",True,True,"Wise, Emmett and Cheng, Qilong and Kelly, Jonathan",2023.0,,,,IEEE Transactions on Robotics,"Spatiotemporal Calibration of 3D Millimetre-Wavelength Radar-Camera Pairs",Spatiotemporal Calibration of 3-D Millimetre-Wavelength Radar ...,http://ieeexplore.ieee.org/iel7/8860/10352149/10256219.pdf,"During calibration, the approach in [6] filters radar-camera measurement pairs by return intensity; the intensity is maximal for reflectors that lie on the" "Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers",2505.22167v1,ho2020ddpm,\cite{ho2020ddpm},Denoising Diffusion Probabilistic Models,http://arxiv.org/abs/2006.11239v2,"We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. Our implementation is available at https://github.com/hojonathanho/diffusion",True,True,"Ho, Jonathan and Jain, Ajay and Abbeel, Pieter",2020.0,,,,Advances in neural information processing systems,Denoising Diffusion Probabilistic Models,Denoising Diffusion Probabilistic Models,http://arxiv.org/pdf/2006.11239v2,"We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. 
Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. Our implementation is available at https://github.com/hojonathanho/diffusion" "Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers",2505.22167v1,rombach2022ldm,\cite{rombach2022ldm},High-resolution image synthesis with latent diffusion models,,,True,False,"Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj{\""o}rn",2022.0,,,,,High-resolution image synthesis with latent diffusion models,[PDF] High-Resolution Image Synthesis With Latent Diffusion Models,https://openaccess.thecvf.com/content/CVPR2022/papers/Rombach_High-Resolution_Image_Synthesis_With_Latent_Diffusion_Models_CVPR_2022_paper.pdf,"High-Resolution Image Synthesis with Latent Diffusion Models. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. Ludwig Maximilian University of Munich & IWR, Heidelberg University, Germany; Runway ML. https://github.com/CompVis/latent-diffusion Abstract: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Our latent diffusion models (LDMs) achieve new state of the art scores for image inpainting and class-conditional image synthesis and highly competitive performance on various tasks, including unconditional image generation, text-to-image synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs." "Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers",2505.22167v1,li2024qdm,\cite{li2024qdm},Q-DM: An efficient low-bit quantized diffusion model,,,True,False,"Li, Yanjing and Xu, Sheng and Cao, Xianbin and Sun, Xiao and Zhang, Baochang",2024.0,,,,Advances in Neural Information Processing Systems,Q-DM: An efficient low-bit quantized diffusion model,Q-DM: An Efficient Low-bit Quantized Diffusion Model,https://proceedings.neurips.cc/paper_files/paper/2023/hash/f1ee1cca0721de55bb35cf28ab95e1b4-Abstract-Conference.html,We propose an efficient Q-DM to calculate low-bit DMs by considering both training and inference process in the same framework. "Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers",2505.22167v1,zheng2024binarydm,\cite{zheng2024binarydm},BinaryDM: Towards accurate binarization of diffusion model,,,True,False,"Zheng, Xingyu and Qin, Haotong and Ma, Xudong and Zhang, Mingyuan and Hao, Haojie and Wang, Jiakai and Zhao, Zixiang and Guo, Jinyang and Liu, Xianglong",2024.0,,,,arXiv preprint arXiv:2404.05662,BinaryDM: Towards accurate binarization of diffusion model,BinaryDM: Towards Accurate Binarization of Diffusion Model,https://arxiv.org/abs/2404.05662v1/,"In this paper, we propose BinaryDM, a novel accurate quantization-aware training approach to push the weights of diffusion models towards the limit of 1-bit."
"Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers",2505.22167v1,zheng2024bidm,\cite{zheng2024bidm},BiDM: Pushing the Limit of Quantization for Diffusion Models,http://arxiv.org/abs/2412.05926v1,"Diffusion models (DMs) have been significantly developed and widely used in various applications due to their excellent generative qualities. However, the expensive computation and massive parameters of DMs hinder their practical use in resource-constrained scenarios. As one of the effective compression approaches, quantization allows DMs to achieve storage saving and inference acceleration by reducing bit-width while maintaining generation performance. However, as the most extreme quantization form, 1-bit binarization causes the generation performance of DMs to face severe degradation or even collapse. This paper proposes a novel method, namely BiDM, for fully binarizing weights and activations of DMs, pushing quantization to the 1-bit limit. From a temporal perspective, we introduce the Timestep-friendly Binary Structure (TBS), which uses learnable activation binarizers and cross-timestep feature connections to address the highly timestep-correlated activation features of DMs. From a spatial perspective, we propose Space Patched Distillation (SPD) to address the difficulty of matching binary features during distillation, focusing on the spatial locality of image generation tasks and noise estimation networks. As the first work to fully binarize DMs, the W1A1 BiDM on the LDM-4 model for LSUN-Bedrooms 256$\times$256 achieves a remarkable FID of 22.74, significantly outperforming the current state-of-the-art general binarization methods with an FID of 59.44 and invalid generative samples, and achieves up to excellent 28.0 times storage and 52.7 times OPs savings. The code is available at https://github.com/Xingyu-Zheng/BiDM .",True,True,"Zheng, Xingyu and Liu, Xianglong and Bian, Yichen and Ma, Xudong and Zhang, Yulun and Wang, Jiakai and Guo, Jinyang and Qin, Haotong",2024.0,,,,arXiv preprint arXiv:2412.05926,BiDM: Pushing the Limit of Quantization for Diffusion Models,BiDM: Pushing the Limit of Quantization for Diffusion Models,http://arxiv.org/pdf/2412.05926v1,"Diffusion models (DMs) have been significantly developed and widely used in various applications due to their excellent generative qualities. However, the expensive computation and massive parameters of DMs hinder their practical use in resource-constrained scenarios. As one of the effective compression approaches, quantization allows DMs to achieve storage saving and inference acceleration by reducing bit-width while maintaining generation performance. However, as the most extreme quantization form, 1-bit binarization causes the generation performance of DMs to face severe degradation or even collapse. This paper proposes a novel method, namely BiDM, for fully binarizing weights and activations of DMs, pushing quantization to the 1-bit limit. From a temporal perspective, we introduce the Timestep-friendly Binary Structure (TBS), which uses learnable activation binarizers and cross-timestep feature connections to address the highly timestep-correlated activation features of DMs. From a spatial perspective, we propose Space Patched Distillation (SPD) to address the difficulty of matching binary features during distillation, focusing on the spatial locality of image generation tasks and noise estimation networks. 
As the first work to fully binarize DMs, the W1A1 BiDM on the LDM-4 model for LSUN-Bedrooms 256$\times$256 achieves a remarkable FID of 22.74, significantly outperforming the current state-of-the-art general binarization methods with an FID of 59.44 and invalid generative samples, and achieves up to excellent 28.0 times storage and 52.7 times OPs savings. The code is available at https://github.com/Xingyu-Zheng/BiDM ." "Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers",2505.22167v1,lu2024terdit,\cite{lu2024terdit},TerDiT: Ternary Diffusion Models with Transformers,http://arxiv.org/abs/2405.14854v2,"Recent developments in large-scale pre-trained text-to-image diffusion models have significantly improved the generation of high-fidelity images, particularly with the emergence of diffusion transformer models (DiTs). Among diffusion models, diffusion transformers have demonstrated superior image-generation capabilities, boosting lower FID scores and higher scalability. However, deploying large-scale DiT models can be expensive due to their excessive parameter numbers. Although existing research has explored efficient deployment techniques for diffusion models, such as model quantization, there is still little work concerning DiT-based models. To tackle this research gap, we propose TerDiT, the first quantization-aware training (QAT) and efficient deployment scheme for extremely low-bit diffusion transformer models. We focus on the ternarization of DiT networks, with model sizes ranging from 600M to 4.2B, and image resolution from 256$\times$256 to 512$\times$512. Our work contributes to the exploration of efficient deployment of large-scale DiT models, demonstrating the feasibility of training extremely low-bit DiT models from scratch while maintaining competitive image generation capacities compared to full-precision models. Our code and pre-trained TerDiT checkpoints have been released at https://github.com/Lucky-Lance/TerDiT.",True,True,"Lu, Xudong and Zhou, Aojun and Lin, Ziyi and Liu, Qi and Xu, Yuhui and Zhang, Renrui and Wen, Yafei and Ren, Shuai and Gao, Peng and Yan, Junchi and others",2024.0,,,,arXiv preprint arXiv:2405.14854,TerDiT: Ternary Diffusion Models with Transformers,TerDiT: Ternary Diffusion Models with Transformers,http://arxiv.org/pdf/2405.14854v2,"Recent developments in large-scale pre-trained text-to-image diffusion models have significantly improved the generation of high-fidelity images, particularly with the emergence of diffusion transformer models (DiTs). Among diffusion models, diffusion transformers have demonstrated superior image-generation capabilities, boosting lower FID scores and higher scalability. However, deploying large-scale DiT models can be expensive due to their excessive parameter numbers. Although existing research has explored efficient deployment techniques for diffusion models, such as model quantization, there is still little work concerning DiT-based models. To tackle this research gap, we propose TerDiT, the first quantization-aware training (QAT) and efficient deployment scheme for extremely low-bit diffusion transformer models. We focus on the ternarization of DiT networks, with model sizes ranging from 600M to 4.2B, and image resolution from 256$\times$256 to 512$\times$512. 
Our work contributes to the exploration of efficient deployment of large-scale DiT models, demonstrating the feasibility of training extremely low-bit DiT models from scratch while maintaining competitive image generation capacities compared to full-precision models. Our code and pre-trained TerDiT checkpoints have been released at https://github.com/Lucky-Lance/TerDiT." "Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers",2505.22167v1,li2023qdiffusion,\cite{li2023qdiffusion},Q-Diffusion: Quantizing Diffusion Models,http://arxiv.org/abs/2302.04304v3,"Diffusion models have achieved great success in image synthesis through iterative noise estimation using deep neural networks. However, the slow inference, high memory consumption, and computation intensity of the noise estimation model hinder the efficient adoption of diffusion models. Although post-training quantization (PTQ) is considered a go-to compression method for other tasks, it does not work out-of-the-box on diffusion models. We propose a novel PTQ method specifically tailored towards the unique multi-timestep pipeline and model architecture of the diffusion models, which compresses the noise estimation network to accelerate the generation process. We identify the key difficulty of diffusion model quantization as the changing output distributions of noise estimation networks over multiple time steps and the bimodal activation distribution of the shortcut layers within the noise estimation network. We tackle these challenges with timestep-aware calibration and split shortcut quantization in this work. Experimental results show that our proposed method is able to quantize full-precision unconditional diffusion models into 4-bit while maintaining comparable performance (small FID change of at most 2.34 compared to >100 for traditional PTQ) in a training-free manner. Our approach can also be applied to text-guided image generation, where we can run stable diffusion in 4-bit weights with high generation quality for the first time.",True,True,"Li, Xiuyu and Liu, Yijiang and Lian, Long and Yang, Huanrui and Dong, Zhen and Kang, Daniel and Zhang, Shanghang and Keutzer, Kurt",2023.0,,,,,Q-Diffusion: Quantizing Diffusion Models,Q-Diffusion: Quantizing Diffusion Models,http://arxiv.org/pdf/2302.04304v3,"Diffusion models have achieved great success in image synthesis through iterative noise estimation using deep neural networks. However, the slow inference, high memory consumption, and computation intensity of the noise estimation model hinder the efficient adoption of diffusion models. Although post-training quantization (PTQ) is considered a go-to compression method for other tasks, it does not work out-of-the-box on diffusion models. We propose a novel PTQ method specifically tailored towards the unique multi-timestep pipeline and model architecture of the diffusion models, which compresses the noise estimation network to accelerate the generation process. We identify the key difficulty of diffusion model quantization as the changing output distributions of noise estimation networks over multiple time steps and the bimodal activation distribution of the shortcut layers within the noise estimation network. We tackle these challenges with timestep-aware calibration and split shortcut quantization in this work. 
Experimental results show that our proposed method is able to quantize full-precision unconditional diffusion models into 4-bit while maintaining comparable performance (small FID change of at most 2.34 compared to >100 for traditional PTQ) in a training-free manner. Our approach can also be applied to text-guided image generation, where we can run stable diffusion in 4-bit weights with high generation quality for the first time." "Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers",2505.22167v1,shang2023ptq4dm,\cite{shang2023ptq4dm},Post-training Quantization on Diffusion Models,http://arxiv.org/abs/2211.15736v3,"Denoising diffusion (score-based) generative models have recently achieved significant accomplishments in generating realistic and diverse data. These approaches define a forward diffusion process for transforming data into noise and a backward denoising process for sampling data from noise. Unfortunately, the generation process of current denoising diffusion models is notoriously slow due to the lengthy iterative noise estimations, which rely on cumbersome neural networks. It prevents the diffusion models from being widely deployed, especially on edge devices. Previous works accelerate the generation process of diffusion model (DM) via finding shorter yet effective sampling trajectories. However, they overlook the cost of noise estimation with a heavy network in every iteration. In this work, we accelerate generation from the perspective of compressing the noise estimation network. Due to the difficulty of retraining DMs, we exclude mainstream training-aware compression paradigms and introduce post-training quantization (PTQ) into DM acceleration. However, the output distributions of noise estimation networks change with time-step, making previous PTQ methods fail in DMs since they are designed for single-time step scenarios. To devise a DM-specific PTQ method, we explore PTQ on DM in three aspects: quantized operations, calibration dataset, and calibration metric. We summarize and use several observations derived from all-inclusive investigations to formulate our method, which especially targets the unique multi-time-step structure of DMs. Experimentally, our method can directly quantize full-precision DMs into 8-bit models while maintaining or even improving their performance in a training-free manner. Importantly, our method can serve as a plug-and-play module on other fast-sampling methods, e.g., DDIM. The code is available at https://github.com/42Shawn/PTQ4DM .",True,True,"Shang, Yuzhang and Yuan, Zhihang and Xie, Bin and Wu, Bingzhe and Yan, Yan",2023.0,,,,,Post-training Quantization on Diffusion Models,[2211.15736] Post-training Quantization on Diffusion Models - arXiv,https://arxiv.org/abs/2211.15736,Our method can directly quantize full-precision DMs into 8-bit models while maintaining or even improving their performance in a training-free manner. "Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers",2505.22167v1,he2024ptqd,\cite{he2024ptqd},PTQD: Accurate Post-Training Quantization for Diffusion Models,http://arxiv.org/abs/2305.10657v4,"Diffusion models have recently dominated image synthesis tasks. However, the iterative denoising process is expensive in computations at inference time, making diffusion models less practical for low-latency and scalable real-world applications. 
Post-training quantization (PTQ) of diffusion models can significantly reduce the model size and accelerate the sampling process without re-training. Nonetheless, applying existing PTQ methods directly to low-bit diffusion models can significantly impair the quality of generated samples. Specifically, for each denoising step, quantization noise leads to deviations in the estimated mean and mismatches with the predetermined variance schedule. As the sampling process proceeds, the quantization noise may accumulate, resulting in a low signal-to-noise ratio (SNR) during the later denoising steps. To address these challenges, we propose a unified formulation for the quantization noise and diffusion perturbed noise in the quantized denoising process. Specifically, we first disentangle the quantization noise into its correlated and residual uncorrelated parts regarding its full-precision counterpart. The correlated part can be easily corrected by estimating the correlation coefficient. For the uncorrelated part, we subtract the bias from the quantized results to correct the mean deviation and calibrate the denoising variance schedule to absorb the excess variance resulting from quantization. Moreover, we introduce a mixed-precision scheme for selecting the optimal bitwidth for each denoising step. Extensive experiments demonstrate that our method outperforms previous post-training quantized diffusion models, with only a 0.06 increase in FID score compared to full-precision LDM-4 on ImageNet 256x256, while saving 19.9x bit operations. Code is available at https://github.com/ziplab/PTQD.",True,True,"He, Yefei and Liu, Luping and Liu, Jing and Wu, Weijia and Zhou, Hong and Zhuang, Bohan",2024.0,,,,Advances in Neural Information Processing Systems,PTQD: Accurate Post-Training Quantization for Diffusion Models,PTQD: Accurate Post-Training Quantization for Diffusion Models,https://arxiv.org/abs/2305.10657,Post-training quantization (PTQ) of diffusion models can significantly reduce the model size and accelerate the sampling process without re-training. "Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers",2505.22167v1,huang2024tfmq,\cite{huang2024tfmq},TFMQ-DM: Temporal feature maintenance quantization for diffusion models,,,True,False,"Huang, Yushi and Gong, Ruihao and Liu, Jing and Chen, Tianlong and Liu, Xianglong",2024.0,,,,,TFMQ-DM: Temporal feature maintenance quantization for diffusion models,TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models,http://arxiv.org/pdf/2311.16503v3,"The Diffusion model, a prevalent framework for image generation, encounters significant challenges in terms of broad applicability due to its extended inference times and substantial memory requirements. Efficient Post-training Quantization (PTQ) is pivotal for addressing these issues in traditional models. Different from traditional models, diffusion models heavily depend on the time-step $t$ to achieve satisfactory multi-round denoising. Usually, $t$ from the finite set $\{1, \ldots, T\}$ is encoded to a temporal feature by a few modules totally irrespective of the sampling data. However, existing PTQ methods do not optimize these modules separately. They adopt inappropriate reconstruction targets and complex calibration methods, resulting in a severe disturbance of the temporal feature and denoising trajectory, as well as a low compression efficiency.
To solve these, we propose a Temporal Feature Maintenance Quantization (TFMQ) framework building upon a Temporal Information Block which is just related to the time-step $t$ and unrelated to the sampling data. Powered by the pioneering block design, we devise temporal information aware reconstruction (TIAR) and finite set calibration (FSC) to align the full-precision temporal features in a limited time. Equipped with the framework, we can maintain the most temporal information and ensure the end-to-end generation quality. Extensive experiments on various datasets and diffusion models prove our state-of-the-art results. Remarkably, our quantization approach, for the first time, achieves model performance nearly on par with the full-precision model under 4-bit weight quantization. Additionally, our method incurs almost no extra computational cost and accelerates quantization time by $2.0 \times$ on LSUN-Bedrooms $256 \times 256$ compared to previous works. Our code is publicly available at https://github.com/ModelTC/TFMQ-DM." "Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers",2505.22167v1,wang2024quest,\cite{wang2024quest},"QuEST: Low-bit Diffusion Model Quantization via Efficient Selective Finetuning",http://arxiv.org/abs/2402.03666v6,"The practical deployment of diffusion models is still hindered by the high memory and computational overhead. Although quantization paves a way for model compression and acceleration, existing methods face challenges in achieving low-bit quantization efficiently. In this paper, we identify imbalanced activation distributions as a primary source of quantization difficulty, and propose to adjust these distributions through weight finetuning to be more quantization-friendly. We provide both theoretical and empirical evidence supporting finetuning as a practical and reliable solution. Building on this approach, we further distinguish two critical types of quantized layers: those responsible for retaining essential temporal information and those particularly sensitive to bit-width reduction. By selectively finetuning these layers under both local and global supervision, we mitigate performance degradation while enhancing quantization efficiency. Our method demonstrates its efficacy across three high-resolution image generation tasks, obtaining state-of-the-art performance across multiple bit-width settings.",True,True,"Wang, Haoxuan and Shang, Yuzhang and Yuan, Zhihang and Wu, Junyi and Yan, Yan",2024.0,,,,arXiv preprint arXiv:2402.03666,"QuEST: Low-bit Diffusion Model Quantization via Efficient Selective Finetuning",Low-bit Diffusion Model Quantization via Efficient Selective Finetuning,https://arxiv.org/abs/2402.03666,"In this paper, we identify imbalanced activation distributions as a primary source of quantization difficulty, and propose to adjust these distributions" "Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers",2505.22167v1,he2023efficientdm,\cite{he2023efficientdm},"EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Diffusion Models",http://arxiv.org/abs/2310.03270v4,"Diffusion models have demonstrated remarkable capabilities in image synthesis and related generative tasks. Nevertheless, their practicality for real-world applications is constrained by substantial computational costs and latency issues. 
Quantization is a dominant way to compress and accelerate diffusion models, where post-training quantization (PTQ) and quantization-aware training (QAT) are two main approaches, each bearing its own properties. While PTQ exhibits efficiency in terms of both time and data usage, it may lead to diminished performance in low bit-width. On the other hand, QAT can alleviate performance degradation but comes with substantial demands on computational and data resources. In this paper, we introduce a data-free and parameter-efficient fine-tuning framework for low-bit diffusion models, dubbed EfficientDM, to achieve QAT-level performance with PTQ-like efficiency. Specifically, we propose a quantization-aware variant of the low-rank adapter (QALoRA) that can be merged with model weights and jointly quantized to low bit-width. The fine-tuning process distills the denoising capabilities of the full-precision model into its quantized counterpart, eliminating the requirement for training data. We also introduce scale-aware optimization and temporal learned step-size quantization to further enhance performance. Extensive experimental results demonstrate that our method significantly outperforms previous PTQ-based diffusion models while maintaining similar time and data efficiency. Specifically, there is only a 0.05 sFID increase when quantizing both weights and activations of LDM-4 to 4-bit on ImageNet 256x256. Compared to QAT-based methods, our EfficientDM also boasts a 16.2x faster quantization speed with comparable generation quality. Code is available at \href{https://github.com/ThisisBillhe/EfficientDM}{this hrl}.",True,True,"He, Yefei and Liu, Jing and Wu, Weijia and Zhou, Hong and Zhuang, Bohan",2023.0,,,,arXiv preprint arXiv:2310.03270,"EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Diffusion Models",Efficient Quantization-Aware Fine-Tuning of Low-Bit ...,https://openreview.net/forum?id=UmMa3UNDAz,"by Y He · Cited by 59 — We introduce a data-free, quantization-aware and parameter-efficient fine-tuning framework for low-bit diffusion models, dubbed EfficientDM, to achieve QAT-" "Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers",2505.22167v1,zhao2025mixdq,\cite{zhao2025mixdq},"MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization",http://arxiv.org/abs/2405.17873v2,"Diffusion models have achieved significant visual generation quality. However, their significant computational and memory costs pose challenge for their application on resource-constrained mobile devices or even desktop GPUs. Recent few-step diffusion models reduces the inference time by reducing the denoising steps. However, their memory consumptions are still excessive. The Post Training Quantization (PTQ) replaces high bit-width FP representation with low-bit integer values (INT4/8) , which is an effective and efficient technique to reduce the memory cost. However, when applying to few-step diffusion models, existing quantization methods face challenges in preserving both the image quality and text alignment. To address this issue, we propose an mixed-precision quantization framework - MixDQ. Firstly, We design specialized BOS-aware quantization method for highly sensitive text embedding quantization. Then, we conduct metric-decoupled sensitivity analysis to measure the sensitivity of each layer. Finally, we develop an integer-programming-based method to conduct bit-width allocation. 
While existing quantization methods fall short at W8A8, MixDQ could achieve W8A8 without performance loss, and W4A8 with negligible visual degradation. Compared with FP16, we achieve 3-4x reduction in model size and memory cost, and 1.45x latency speedup.",True,True,"Zhao, Tianchen and Ning, Xuefei and Fang, Tongcheng and Liu, Enshu and Huang, Guyue and Lin, Zinan and Yan, Shengen and Dai, Guohao and Wang, Yu",2025.0,,,,,"MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization",MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion ...,https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/02212.pdf,"by T Zhao · Cited by 29 — MixDQ is a mixed-precision quantization method for few-step text-to-image models, compressing memory by 3.4x without performance loss." "Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers",2505.22167v1,chen2024qdit,\cite{chen2024qdit},Q-DiT: Accurate post-training quantization for diffusion transformers,,,True,False,"Chen, Lei and Meng, Yuan and Tang, Chen and Ma, Xinzhu and Jiang, Jingyan and Wang, Xin and Wang, Zhi and Zhu, Wenwu",2024.0,,,,arXiv preprint arXiv:2406.17343,Q-DiT: Accurate post-training quantization for diffusion transformers,[PDF] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers,https://openaccess.thecvf.com/content/CVPR2025/papers/Chen_Q-DiT_Accurate_Post-Training_Quantization_for_Diffusion_Transformers_CVPR_2025_paper.pdf,"Post-Training Quantization (PTQ) emerges as a promising solution, enabling model compression and accelerated inference for pretrained models, without the" "Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers",2505.22167v1,wu2024ptq4dit,\cite{wu2024ptq4dit},PTQ4DiT: Post-training Quantization for Diffusion Transformers,http://arxiv.org/abs/2405.16005v3,"The recent introduction of Diffusion Transformers (DiTs) has demonstrated exceptional capabilities in image generation by using a different backbone architecture, departing from traditional U-Nets and embracing the scalable nature of transformers. Despite their advanced capabilities, the wide deployment of DiTs, particularly for real-time applications, is currently hampered by considerable computational demands at the inference stage. Post-training Quantization (PTQ) has emerged as a fast and data-efficient solution that can significantly reduce computation and memory footprint by using low-bit weights and activations. However, its applicability to DiTs has not yet been explored and faces non-trivial difficulties due to the unique design of DiTs. In this paper, we propose PTQ4DiT, a specifically designed PTQ method for DiTs. We discover two primary quantization challenges inherent in DiTs, notably the presence of salient channels with extreme magnitudes and the temporal variability in distributions of salient activation over multiple timesteps. To tackle these challenges, we propose Channel-wise Salience Balancing (CSB) and Spearmen's $\rho$-guided Salience Calibration (SSC). CSB leverages the complementarity property of channel magnitudes to redistribute the extremes, alleviating quantization errors for both activations and weights. SSC extends this approach by dynamically adjusting the balanced salience to capture the temporal variations in activation. Additionally, to eliminate extra computational costs caused by PTQ4DiT during inference, we design an offline re-parameterization strategy for DiTs.
Experiments demonstrate that our PTQ4DiT successfully quantizes DiTs to 8-bit precision (W8A8) while preserving comparable generation ability and further enables effective quantization to 4-bit weight precision (W4A8) for the first time.",True,True,"Wu, Junyi and Wang, Haoxuan and Shang, Yuzhang and Shah, Mubarak and Yan, Yan",2024.0,,,,arXiv preprint arXiv:2405.16005,PTQ4DiT: Post-training Quantization for Diffusion Transformers,PTQ4DiT: Post-training Quantization for Diffusion Transformers,https://openreview.net/forum?id=NLmAGkN6nn&referrer=%5Bthe%20profile%20of%20Haoxuan%20Wang%5D(%2Fprofile%3Fid%3D~Haoxuan_Wang1),"This paper presents PTQ4DiT, a quantization method designed for diffusion transformers. The method focuses on addressing quantization challenges" "Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers",2505.22167v1,li2024svdqunat,\cite{li2024svdqunat},SVDQuant: Absorbing outliers by low-rank components for 4-bit diffusion models,,,True,False,"Li, Muyang and Lin, Yujun and Zhang, Zhekai and Cai, Tianle and Li, Xiuyu and Guo, Junxian and Xie, Enze and Meng, Chenlin and Zhu, Jun-Yan and Han, Song",2024.0,,,,arXiv preprint arXiv:2411.05007,SVDQuant: Absorbing outliers by low-rank components for 4-bit diffusion models,SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit ...,https://arxiv.org/html/2411.05007v1,SVDQuant is a post-training quantization technique for 4-bit weights and activations that well maintains visual fidelity. "Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers",2505.22167v1,zhao2024vidit,\cite{zhao2024vidit},ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation,,,True,False,"Zhao, Tianchen and Fang, Tongcheng and Liu, Enshu and Rui, Wan and Soedarmadji, Widyadewi and Li, Shiyao and Lin, Zinan and Dai, Guohao and Yan, Shengen and Yang, Huazhong and others",2024.0,,,,arXiv preprint arXiv:2406.02540,ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation,ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation,http://arxiv.org/pdf/2406.02540v3,"Diffusion transformers have demonstrated remarkable performance in visual generation tasks, such as generating realistic images or videos based on textual instructions. However, larger model sizes and multi-frame processing for video generation lead to increased computational and memory costs, posing challenges for practical deployment on edge devices. Post-Training Quantization (PTQ) is an effective method for reducing memory costs and computational complexity. When quantizing diffusion transformers, we find that existing quantization methods face challenges when applied to text-to-image and video tasks. To address these challenges, we begin by systematically analyzing the source of quantization error and conclude with the unique challenges posed by DiT quantization. Accordingly, we design an improved quantization scheme: ViDiT-Q (Video & Image Diffusion Transformer Quantization), tailored specifically for DiT models. We validate the effectiveness of ViDiT-Q across a variety of text-to-image and video models, achieving W8A8 and W4A8 with negligible degradation in visual quality and metrics. Additionally, we implement efficient GPU kernels to achieve practical 2-2.5x memory saving and a 1.4-1.7x end-to-end latency speedup."
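The quantization records above all build on the same low-bit primitive: mapping floating-point weights and activations onto a small integer grid, with notations like W8A8 and W4A8 giving the weight and activation bit-widths. Below is a minimal sketch of asymmetric uniform quantization under a per-tensor scale, the baseline scheme these PTQ methods refine; it is not taken from any of the cited codebases, and the function names are illustrative only.

```python
import numpy as np

def quantize_uniform(x: np.ndarray, n_bits: int = 8):
    """Asymmetric uniform quantization: map a float tensor onto the
    integer grid [0, 2^n_bits - 1] via a per-tensor scale and zero-point."""
    qmax = 2 ** n_bits - 1
    x_min, x_max = float(x.min()), float(x.max())
    scale = max(x_max - x_min, 1e-8) / qmax   # step size between grid points
    zero_point = int(round(-x_min / scale))   # integer that represents 0.0
    q = np.clip(np.round(x / scale) + zero_point, 0, qmax)
    return q.astype(np.int32), scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return (q.astype(np.float32) - zero_point) * scale

# W4A8-style round trip: 4-bit weights lose more precision than 8-bit ones.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
for bits in (8, 4):
    q, s, z = quantize_uniform(w, n_bits=bits)
    print(bits, "bits, mean abs error:", float(np.abs(dequantize(q, s, z) - w).mean()))
```

The salient-channel balancing, mixed precision, and low-rank tricks in the cited papers all exist to reduce the reconstruction error that this naive per-tensor scheme incurs at 4 bits.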
"ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM",2505.22552v1,fever,\cite{fever},FEVER: a large-scale dataset for Fact Extraction and VERification,http://arxiv.org/abs/1803.05355v3,"In this paper we introduce a new publicly available dataset for verification against textual sources, FEVER: Fact Extraction and VERification. It consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims are classified as Supported, Refuted or NotEnoughInfo by annotators achieving 0.6841 in Fleiss $\kappa$. For the first two classes, the annotators also recorded the sentence(s) forming the necessary evidence for their judgment. To characterize the challenge of the dataset presented, we develop a pipeline approach and compare it to suitably designed oracles. The best accuracy we achieve on labeling a claim accompanied by the correct evidence is 31.87%, while if we ignore the evidence we achieve 50.91%. Thus we believe that FEVER is a challenging testbed that will help stimulate progress on claim verification against textual sources.",True,True,"James Thorne and Andreas Vlachos and Christos Christodoulopoulos and Arpit Mittal",2018.0,,https://doi.org/10.18653/v1/n18-1074,10.18653/V1/N18-1074,,FEVER: a large-scale dataset for Fact Extraction and VERification,FEVER: a Large-scale Dataset for Fact Extraction and ...,https://aclanthology.org/N18-1074/,"by J Thorne · 2018 · Cited by 2060 — In this paper we introduce a new publicly available dataset for verification against textual sources, FEVER: Fact Extraction and VERification." "ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM",2505.22552v1,faviq,\cite{faviq},{F}a{VIQ}: {FA}ct Verification from Information-seeking Questions,,,True,False,"Park, Jungsoo and Min, Sewon and Kang, Jaewoo and Zettlemoyer, Luke and Hajishirzi, Hannaneh",2022.0,,https://aclanthology.org/2022.acl-long.354/,10.18653/v1/2022.acl-long.354,,{F}a{VIQ}: {FA}ct Verification from Information-seeking Questions,FAVIQ: FAct Verification from Information-seeking Questions,https://aclanthology.org/2022.acl-long.354.pdf,by J Park · 2022 · Cited by 39 — We construct a fact verification dataset from highly ambiguous information-seeking questions. Our claims have significantly less lexical bias "ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM",2505.22552v1,vitamin-c,\cite{vitamin-c},Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence,http://arxiv.org/abs/2103.08541v1,"Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes. We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs. 
Unlike previous resources, the examples in VitaminC are contrastive, i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not. We show that training using this design increases robustness -- improving accuracy by 10% on adversarial fact verification and 6% on adversarial natural language inference (NLI). Moreover, the structure of VitaminC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation.",True,True,"Schuster, Tal and Fisch, Adam and Barzilay, Regina",2021.0,,https://aclanthology.org/2021.naacl-main.52/,10.18653/v1/2021.naacl-main.52,,Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence,Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence,http://arxiv.org/pdf/2103.08541v1,"Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes. We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs. Unlike previous resources, the examples in VitaminC are contrastive, i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not. We show that training using this design increases robustness -- improving accuracy by 10% on adversarial fact verification and 6% on adversarial natural language inference (NLI). Moreover, the structure of VitaminC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation." "ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM",2505.22552v1,hover,\cite{hover},HoVer: A Dataset for Many-Hop Fact Extraction And Claim Verification,http://arxiv.org/abs/2011.03088v2,"We introduce HoVer (HOppy VERification), a dataset for many-hop evidence extraction and fact verification. It challenges models to extract facts from several Wikipedia articles that are relevant to a claim and classify whether the claim is Supported or Not-Supported by the facts. In HoVer, the claims require evidence to be extracted from as many as four English Wikipedia articles and embody reasoning graphs of diverse shapes. Moreover, most of the 3/4-hop claims are written in multiple sentences, which adds to the complexity of understanding long-range dependency relations such as coreference. We show that the performance of an existing state-of-the-art semantic-matching model degrades significantly on our dataset as the number of reasoning hops increases, hence demonstrating the necessity of many-hop reasoning to achieve strong results. 
We hope that the introduction of this challenging dataset and the accompanying evaluation task will encourage research in many-hop fact retrieval and information verification. We make the HoVer dataset publicly available at https://hover-nlp.github.io",True,True,"Yichen Jiang and Shikha Bordia and Zheng Zhong and Charles Dognin and Maneesh Kumar Singh and Mohit Bansal",2020.0,,https://doi.org/10.18653/v1/2020.findings-emnlp.309,10.18653/V1/2020.FINDINGS-EMNLP.309,,HoVer: A Dataset for Many-Hop Fact Extraction And Claim Verification,HoVer: A Dataset for Many-Hop Fact Extraction And Claim Verification,https://arxiv.org/abs/2011.03088,"We introduce HoVer (HOppy VERification), a dataset for many-hop evidence extraction and fact verification. It challenges models to extract facts from several" "ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM",2505.22552v1,graph-review,\cite{graph-review},Graph Neural Networks: A Review of Methods and Applications,http://arxiv.org/abs/1812.08434v6,"Lots of learning tasks require dealing with graph data which contains rich relation information among elements. Modeling physics systems, learning molecular fingerprints, predicting protein interface, and classifying diseases demand a model to learn from graph inputs. In other domains such as learning from non-structural data like texts and images, reasoning on extracted structures (like the dependency trees of sentences and the scene graphs of images) is an important research topic which also needs graph reasoning models. Graph neural networks (GNNs) are neural models that capture the dependence of graphs via message passing between the nodes of graphs. In recent years, variants of GNNs such as graph convolutional network (GCN), graph attention network (GAT), graph recurrent network (GRN) have demonstrated ground-breaking performances on many deep learning tasks. In this survey, we propose a general design pipeline for GNN models and discuss the variants of each component, systematically categorize the applications, and propose four open problems for future research.",True,True,"Jie Zhou and Ganqu Cui and Shengding Hu and Zhengyan Zhang and Cheng Yang and Zhiyuan Liu and Lifeng Wang and Changcheng Li and Maosong Sun",2020.0,,https://doi.org/10.1016/j.aiopen.2021.01.001,10.1016/J.AIOPEN.2021.01.001,{AI} Open,Graph Neural Networks: A Review of Methods and Applications,Graph Neural Networks: A Review of Methods and Applications,http://arxiv.org/pdf/1812.08434v6,"Lots of learning tasks require dealing with graph data which contains rich relation information among elements. Modeling physics systems, learning molecular fingerprints, predicting protein interface, and classifying diseases demand a model to learn from graph inputs. In other domains such as learning from non-structural data like texts and images, reasoning on extracted structures (like the dependency trees of sentences and the scene graphs of images) is an important research topic which also needs graph reasoning models. Graph neural networks (GNNs) are neural models that capture the dependence of graphs via message passing between the nodes of graphs. In recent years, variants of GNNs such as graph convolutional network (GCN), graph attention network (GAT), graph recurrent network (GRN) have demonstrated ground-breaking performances on many deep learning tasks. 
In this survey, we propose a general design pipeline for GNN models and discuss the variants of each component, systematically categorize the applications, and propose four open problems for future research." "ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM",2505.22552v1,tapas,\cite{tapas},TAPAS: Weakly Supervised Table Parsing via Pre-training,http://arxiv.org/abs/2004.02349v2,"Answering natural language questions over tables is usually seen as a semantic parsing task. To alleviate the collection cost of full logical forms, one popular approach focuses on weak supervision consisting of denotations instead of logical forms. However, training semantic parsers from weak supervision poses difficulties, and in addition, the generated logical forms are only used as an intermediate step prior to retrieving the denotation. In this paper, we present TAPAS, an approach to question answering over tables without generating logical forms. TAPAS trains from weak supervision, and predicts the denotation by selecting table cells and optionally applying a corresponding aggregation operator to such selection. TAPAS extends BERT's architecture to encode tables as input, initializes from an effective joint pre-training of text segments and tables crawled from Wikipedia, and is trained end-to-end. We experiment with three different semantic parsing datasets, and find that TAPAS outperforms or rivals semantic parsing models by improving state-of-the-art accuracy on SQA from 55.1 to 67.2 and performing on par with the state-of-the-art on WIKISQL and WIKITQ, but with a simpler model architecture. We additionally find that transfer learning, which is trivial in our setting, from WIKISQL to WIKITQ, yields 48.7 accuracy, 4.2 points above the state-of-the-art.",True,True,"Herzig, Jonathan and Nowak, Pawel Krzysztof and M{\""u}ller, Thomas and Piccinno, Francesco and Eisenschlos, Julian",2020.0,,https://aclanthology.org/2020.acl-main.398/,10.18653/v1/2020.acl-main.398,,TAPAS: Weakly Supervised Table Parsing via Pre-training,TaPas: Weakly Supervised Table Parsing via Pre-training,https://aclanthology.org/2020.acl-main.398/,"by J Herzig · 2020 · Cited by 784 — TaPas trains from weak supervision, and predicts the denotation by selecting table cells and optionally applying a corresponding aggregation operator to such" "ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM",2505.22552v1,rat-sql,\cite{rat-sql},{RAT-SQL}: Relation-Aware Schema Encoding and Linking for Text-to-{SQL} Parsers,,,True,False,"Wang, Bailin and Shin, Richard and Liu, Xiaodong and Polozov, Oleksandr and Richardson, Matthew",2020.0,,https://aclanthology.org/2020.acl-main.677/,10.18653/v1/2020.acl-main.677,,{RAT-SQL}: Relation-Aware Schema Encoding and Linking for Text-to-{SQL} Parsers,RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to ...,https://arxiv.org/abs/1911.04942,"View a PDF of the paper titled RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers, by Bailin Wang and 4 other authors." "ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM",2505.22552v1,programfc,\cite{programfc},Fact-Checking Complex Claims with Program-Guided Reasoning,http://arxiv.org/abs/2305.12744v1,"Fact-checking real-world claims often requires collecting multiple pieces of evidence and applying complex multi-step reasoning. 
In this paper, we present Program-Guided Fact-Checking (ProgramFC), a novel fact-checking model that decomposes complex claims into simpler sub-tasks that can be solved using a shared library of specialized functions. We first leverage the in-context learning ability of large language models to generate reasoning programs to guide the verification process. Afterward, we execute the program by delegating each sub-task to the corresponding sub-task handler. This process makes our model both explanatory and data-efficient, providing clear explanations of its reasoning process and requiring minimal training data. We evaluate ProgramFC on two challenging fact-checking datasets and show that it outperforms seven fact-checking baselines across different settings of evidence availability, with explicit output programs that benefit human debugging. Our codes and data are publicly available at https://github.com/mbzuai-nlp/ProgramFC.",True,True,"Liangming Pan and Xiaobao Wu and Xinyuan Lu and Anh Tuan Luu and William Yang Wang and Min{-}Yen Kan and Preslav Nakov",2023.0,,https://doi.org/10.18653/v1/2023.acl-long.386,10.18653/V1/2023.ACL-LONG.386,,Fact-Checking Complex Claims with Program-Guided Reasoning,Fact-Checking Complex Claims with Program-Guided ...,https://aclanthology.org/2023.acl-long.386/,by L Pan · 2023 · Cited by 158 — A novel fact-checking model that decomposes complex claims into simpler sub-tasks that can be solved using a shared library of specialized functions.See more "ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM",2505.22552v1,folk,\cite{folk},"Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models",http://arxiv.org/abs/2310.05253v2,"Claim verification plays a crucial role in combating misinformation. While existing works on claim verification have shown promising results, a crucial piece of the puzzle that remains unsolved is to understand how to verify claims without relying on human-annotated data, which is expensive to create at a large scale. Additionally, it is important for models to provide comprehensive explanations that can justify their decisions and assist human fact-checkers. This paper presents First-Order-Logic-Guided Knowledge-Grounded (FOLK) Reasoning that can verify complex claims and generate explanations without the need for annotated evidence using Large Language Models (LLMs). FOLK leverages the in-context learning ability of LLMs to translate the claim into a First-Order-Logic (FOL) clause consisting of predicates, each corresponding to a sub-claim that needs to be verified. Then, FOLK performs FOL-Guided reasoning over a set of knowledge-grounded question-and-answer pairs to make veracity predictions and generate explanations to justify its decision-making process. This process makes our model highly explanatory, providing clear explanations of its reasoning process in human-readable form. Our experiment results indicate that FOLK outperforms strong baselines on three datasets encompassing various claim verification challenges. 
Our code and data are available.",True,True,"Haoran Wang and Kai Shu",2023.0,,https://doi.org/10.18653/v1/2023.findings-emnlp.416,10.18653/V1/2023.FINDINGS-EMNLP.416,,"Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models",[PDF] Explainable Claim Verification via Knowledge-Grounded Reasoning ...,https://aclanthology.org/2023.findings-emnlp.416.pdf,"FOLK uses LLMs to translate claims into First-Order Logic, then uses knowledge-grounded reasoning to verify claims and generate explanations." "ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM",2505.22552v1,factkg,\cite{factkg},FactKG: Fact Verification via Reasoning on Knowledge Graphs,http://arxiv.org/abs/2305.06590v2,"In real world applications, knowledge graphs (KG) are widely used in various domains (e.g. medical applications and dialogue agents). However, for fact verification, KGs have not been adequately utilized as a knowledge source. KGs can be a valuable knowledge source in fact verification due to their reliability and broad applicability. A KG consists of nodes and edges which makes it clear how concepts are linked together, allowing machines to reason over chains of topics. However, there are many challenges in understanding how these machine-readable concepts map to information in text. To enable the community to better use KGs, we introduce a new dataset, FactKG: Fact Verification via Reasoning on Knowledge Graphs. It consists of 108k natural language claims with five types of reasoning: One-hop, Conjunction, Existence, Multi-hop, and Negation. Furthermore, FactKG contains various linguistic patterns, including colloquial style claims as well as written style claims to increase practicality. Lastly, we develop a baseline approach and analyze FactKG over these reasoning types. We believe FactKG can advance both reliability and practicality in KG-based fact verification.",True,True,"Jiho Kim and Sungjin Park and Yeonsu Kwon and Yohan Jo and James Thorne and Edward Choi",2023.0,,https://doi.org/10.18653/v1/2023.acl-long.895,10.18653/V1/2023.ACL-LONG.895,,FactKG: Fact Verification via Reasoning on Knowledge Graphs,FactKG: Fact Verification via Reasoning on Knowledge Graphs,http://arxiv.org/pdf/2305.06590v2,"In real world applications, knowledge graphs (KG) are widely used in various domains (e.g. medical applications and dialogue agents). However, for fact verification, KGs have not been adequately utilized as a knowledge source. KGs can be a valuable knowledge source in fact verification due to their reliability and broad applicability. A KG consists of nodes and edges which makes it clear how concepts are linked together, allowing machines to reason over chains of topics. However, there are many challenges in understanding how these machine-readable concepts map to information in text. To enable the community to better use KGs, we introduce a new dataset, FactKG: Fact Verification via Reasoning on Knowledge Graphs. It consists of 108k natural language claims with five types of reasoning: One-hop, Conjunction, Existence, Multi-hop, and Negation. Furthermore, FactKG contains various linguistic patterns, including colloquial style claims as well as written style claims to increase practicality. Lastly, we develop a baseline approach and analyze FactKG over these reasoning types. We believe FactKG can advance both reliability and practicality in KG-based fact verification." 
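At its core, the KG-based verification pipeline that FactKG and the surrounding systems assume reduces to checking claim-derived triples against a graph. A deliberately minimal sketch follows, with a hypothetical two-triple claim and a toy graph; real systems must also retrieve evidence, decompose claims automatically, and handle a NotEnoughInfo outcome, all of which this omits.

```python
# Toy knowledge graph as a set of (subject, relation, object) triples.
KG = {
    ("Albert_Einstein", "bornIn", "Ulm"),
    ("Ulm", "country", "Germany"),
}

def verify(claim_triples, kg):
    """Label a claim SUPPORTED iff every decomposed sub-claim triple
    exists in the graph; collect the matching triples as evidence."""
    evidence = [t for t in claim_triples if t in kg]
    label = "SUPPORTED" if len(evidence) == len(claim_triples) else "REFUTED"
    return label, evidence

# Claim: "Einstein was born in Ulm, Germany." decomposed into two hops.
claim = [("Albert_Einstein", "bornIn", "Ulm"), ("Ulm", "country", "Germany")]
print(verify(claim, KG))  # ('SUPPORTED', [both triples])
```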
"ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM",2505.22552v1,kg_gpt,\cite{kg_gpt},"{KG-GPT:} {A} General Framework for Reasoning on Knowledge Graphs Using Large Language Models",,,True,False,"Jiho Kim and Yeonsu Kwon and Yohan Jo and Edward Choi",2023.0,,https://doi.org/10.18653/v1/2023.findings-emnlp.631,10.18653/V1/2023.FINDINGS-EMNLP.631,,"{KG-GPT:} {A} General Framework for Reasoning on Knowledge Graphs Using Large Language Models",KG-GPT: A General Framework for Reasoning on Knowledge ...,https://www.researchgate.net/publication/376404206_KG-GPT_A_General_Framework_for_Reasoning_on_Knowledge_Graphs_Using_Large_Language_Models,"Recently, Large Language Models (LLMs) have shown remarkable proficiency, prompting growing interest in AQA among researchers.GraphLLM: A General Framework for Multi-hop Question Answering over Knowledge Graphs Using Large Language Models ." "ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM",2505.22552v1,struct-gpt,\cite{struct-gpt},{S}truct{GPT}: A General Framework for Large Language Model to Reason over Structured Data,,,True,False,"Jiang, Jinhao and Zhou, Kun and Dong, Zican and Ye, Keming and Zhao, Xin and Wen, Ji-Rong",2023.0,,https://aclanthology.org/2023.emnlp-main.574/,10.18653/v1/2023.emnlp-main.574,,{S}truct{GPT}: A General Framework for Large Language Model to Reason over Structured Data,StructGPT: A General Framework for Large Language Model ... - arXiv,https://arxiv.org/abs/2305.09645,"View a PDF of the paper titled StructGPT: A General Framework for Large Language Model to Reason over Structured Data, by Jinhao Jiang and 4 other authors > Abstract:In this paper, we study how to improve the zero-shot reasoning ability of large language models~(LLMs) over structured data in a unified way. View a PDF of the paper titled StructGPT: A General Framework for Large Language Model to Reason over Structured Data, by Jinhao Jiang and 4 other authors - [x] Bibliographic Explorer Toggle - [x] Connected Papers Toggle - [x] Litmaps Toggle - [x] alphaXiv Toggle - [x] Links to Code Toggle - [x] DagsHub Toggle - [x] Links to Code Toggle - [x] ScienceCast Toggle - [x] Replicate Toggle " "ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM",2505.22552v1,reasoningongraph,\cite{reasoningongraph},"Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning",http://arxiv.org/abs/2310.01061v2,"Large language models (LLMs) have demonstrated impressive reasoning abilities in complex tasks. However, they lack up-to-date knowledge and experience hallucinations during reasoning, which can lead to incorrect reasoning processes and diminish their performance and trustworthiness. Knowledge graphs (KGs), which capture vast amounts of facts in a structured format, offer a reliable source of knowledge for reasoning. Nevertheless, existing KG-based LLM reasoning methods only treat KGs as factual knowledge bases and overlook the importance of their structural information for reasoning. In this paper, we propose a novel method called reasoning on graphs (RoG) that synergizes LLMs with KGs to enable faithful and interpretable reasoning. Specifically, we present a planning-retrieval-reasoning framework, where RoG first generates relation paths grounded by KGs as faithful plans. These plans are then used to retrieve valid reasoning paths from the KGs for LLMs to conduct faithful reasoning. 
Furthermore, RoG not only distills knowledge from KGs to improve the reasoning ability of LLMs through training but also allows seamless integration with any arbitrary LLMs during inference. Extensive experiments on two benchmark KGQA datasets demonstrate that RoG achieves state-of-the-art performance on KG reasoning tasks and generates faithful and interpretable reasoning results.",True,True,"Linhao Luo and Yuan{-}Fang Li and Gholamreza Haffari and Shirui Pan",2024.0,,https://openreview.net/forum?id=ZGNWW7xZ6Q,,,"Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning",Faithful and Interpretable Large Language Model Reasoning,https://arxiv.org/abs/2310.01061,"arXiv:2310.01061 (cs). Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning, by Linhao Luo and 3 other authors (arXiv:2310.01061v2 [cs.CL] for this version)." "VDDP: Verifiable Distributed Differential Privacy under the Client-Server-Verifier Setup",2504.21752v1,DBLP:conf/eurosys/NarayanFPH15,\cite{DBLP:conf/eurosys/NarayanFPH15},Verifiable Differential Privacy,http://arxiv.org/abs/2208.09011v2,"Differential Privacy (DP) is often presented as a strong privacy-enhancing technology with broad applicability and advocated as a de-facto standard for releasing aggregate statistics on sensitive data. However, in many embodiments, DP introduces a new attack surface: a malicious entity entrusted with releasing statistics could manipulate the results and use the randomness of DP as a convenient smokescreen to mask its nefariousness. Since revealing the random noise would obviate the purpose of introducing it, the miscreant may have a perfect alibi. To close this loophole, we introduce the idea of \textit{Verifiable Differential Privacy}, which requires the publishing entity to output a zero-knowledge proof that convinces an efficient verifier that the output is both DP and reliable. Such a definition might seem unachievable, as a verifier must validate that DP randomness was generated faithfully without learning anything about the randomness itself. We resolve this paradox by carefully mixing private and public randomness to compute verifiable DP counting queries with theoretical guarantees and show that it is also practical for real-world deployment. We also demonstrate that computational assumptions are necessary by showing a separation between information-theoretic DP and computational DP under our definition of verifiability.",True,True,"Arjun Narayan and Ariel Feldman and Antonis Papadimitriou and Andreas Haeberlen",2015.0,,https://doi.org/10.1145/2741948.2741978,10.1145/2741948.2741978,,Verifiable Differential Privacy,Verifiable Differential Privacy,http://arxiv.org/pdf/2208.09011v2,"Differential Privacy (DP) is often presented as a strong privacy-enhancing technology with broad applicability and advocated as a de-facto standard for releasing aggregate statistics on sensitive data.
However, in many embodiments, DP introduces a new attack surface: a malicious entity entrusted with releasing statistics could manipulate the results and use the randomness of DP as a convenient smokescreen to mask its nefariousness. Since revealing the random noise would obviate the purpose of introducing it, the miscreant may have a perfect alibi. To close this loophole, we introduce the idea of \textit{Verifiable Differential Privacy}, which requires the publishing entity to output a zero-knowledge proof that convinces an efficient verifier that the output is both DP and reliable. Such a definition might seem unachievable, as a verifier must validate that DP randomness was generated faithfully without learning anything about the randomness itself. We resolve this paradox by carefully mixing private and public randomness to compute verifiable DP counting queries with theoretical guarantees and show that it is also practical for real-world deployment. We also demonstrate that computational assumptions are necessary by showing a separation between information-theoretic DP and computational DP under our definition of verifiability." "VDDP: Verifiable Distributed Differential Privacy under the Client-Server-Verifier Setup",2504.21752v1,dprio,\cite{dprio},DPrio: Efficient Differential Privacy with High Utility for Prio,,,True,False,"Dana Keeler and Chelsea Komlo and Emily Lepert and Shannon Veitch and Xi He",2023.0,,https://doi.org/10.56553/popets-2023-0086,10.56553/POPETS-2023-0086,Proc. Priv. Enhancing Technol.,DPrio: Efficient Differential Privacy with High Utility for Prio,DPrio: Efficient Differential Privacy with High Utility for Prio,https://petsymposium.org/popets/2023/popets-2023-0086.php,We present a lightweight method that we call DPrio to augment Prio and related systems with differential privacy assurances while ensuring higher data utility. "VDDP: Verifiable Distributed Differential Privacy under the Client-Server-Verifier Setup",2504.21752v1,KCY21,\cite{KCY21},"Preventing Manipulation Attack in Local Differential Privacy using Verifiable Randomization Mechanism",http://arxiv.org/abs/2104.06569v2,"Several randomization mechanisms for local differential privacy (LDP) (e.g., randomized response) are well-studied to improve the utility. However, recent studies show that LDP is generally vulnerable to malicious data providers in nature. Because a data collector has to estimate background data distribution only from already randomized data, malicious data providers can manipulate their output before sending, i.e., randomization would provide them plausible deniability. Attackers can skew the estimations effectively since they are calculated by normalizing with randomization probability defined in the LDP protocol, and can even control the estimations. In this paper, we show how we prevent malicious attackers from compromising LDP protocol. Our approach is to utilize a verifiable randomization mechanism. The data collector can verify the completeness of executing an agreed randomization mechanism for every data provider. Our proposed method completely protects the LDP protocol from output-manipulations, and significantly mitigates the expected damage from attacks. We do not assume any specific attacks, and it works effectively against general output-manipulation, and thus is more powerful than previously proposed countermeasures. 
We describe the secure version of three state-of-the-art LDP protocols and empirically show they cause acceptable overheads according to several parameters.",True,True,"Fumiyuki Kato and Yang Cao and Masatoshi Yoshikawa",2021.0,,https://doi.org/10.1007/978-3-030-81242-3\_3,10.1007/978-3-030-81242-3\_3,,"Preventing Manipulation Attack in Local Differential Privacy using Verifiable Randomization Mechanism",Preventing Manipulation Attack in Local Differential Privacy ...,https://inria.hal.science/hal-03677038v1,"In this paper, we propose secure and efficient verifiable LDP protocols to prevent manipulation attacks. Specifically, we leverage Cryptographic Randomized" "VDDP: Verifiable Distributed Differential Privacy under the Client-Server-Verifier Setup",2504.21752v1,DBLP:conf/iclr/ShamsabadiTCBHP24,\cite{DBLP:conf/iclr/ShamsabadiTCBHP24},"Confidential-DPproof: Confidential Proof of Differentially Private Training",,,True,False,"Ali Shahin Shamsabadi and Gefei Tan and Tudor Cebere and Aur{\'{e}}lien Bellet and Hamed Haddadi and Nicolas Papernot and Xiao Wang and Adrian Weller",2024.0,,https://openreview.net/forum?id=PQY2v6VtGe,,,"Confidential-DPproof: Confidential Proof of Differentially Private Training",[PDF] Confidential-DPproof - OpenReview,https://openreview.net/pdf?id=PQY2v6VtGe,"We introduce Confidential-DPproof, a framework for Confidential Proof of Differentially Private Training, which enhances training with a certificate of the (ε" "VDDP: Verifiable Distributed Differential Privacy under the Client-Server-Verifier Setup",2504.21752v1,BC23,\cite{BC23},Interactive Proofs For Differentially Private Counting,,,True,False,"Ari Biswas and Graham Cormode",2023.0,,https://doi.org/10.1145/3576915.3616681,10.1145/3576915.3616681,,Interactive Proofs For Differentially Private Counting,Interactive Proofs For Differentially Private Counting,https://dl.acm.org/doi/10.1145/3576915.3616681,"We introduce the idea of Interactive Proofs For Differential Privacy, which requires the publishing entity to output a zero knowledge proof." "VDDP: Verifiable Distributed Differential Privacy under the Client-Server-Verifier Setup",2504.21752v1,DBLP:conf/pkc/AmbainisJL04,\cite{DBLP:conf/pkc/AmbainisJL04},Cryptographic Randomized Response Techniques,http://arxiv.org/abs/cs/0302025v2,"We develop cryptographically secure techniques to guarantee unconditional privacy for respondents to polls. Our constructions are efficient and practical, and are shown not to allow cheating respondents to affect the ``tally'' by more than their own vote -- which will be given the exact same weight as that of other respondents. We demonstrate solutions to this problem based on both traditional cryptographic techniques and quantum cryptography.",True,True,"Andris Ambainis and Markus Jakobsson and Helger Lipmaa",2004.0,,https://doi.org/10.1007/978-3-540-24632-9\_31,10.1007/978-3-540-24632-9\_31,,Cryptographic Randomized Response Techniques,Cryptographic Randomized Response Techniques,http://arxiv.org/pdf/cs/0302025v2,"We develop cryptographically secure techniques to guarantee unconditional privacy for respondents to polls. Our constructions are efficient and practical, and are shown not to allow cheating respondents to affect the ``tally'' by more than their own vote -- which will be given the exact same weight as that of other respondents. We demonstrate solutions to this problem based on both traditional cryptographic techniques and quantum cryptography."
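The randomized response mechanism that the record above (and the verifiable LDP protocols earlier in this cluster) builds on is short enough to state in full. A minimal sketch, assuming a binary attribute and a pure epsilon-LDP budget; proving that the coin flips were performed honestly is exactly what the cited verifiable variants add on top, and is not modeled here.

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Warner-style randomized response: report the true bit with
    probability e^eps / (e^eps + 1), otherwise flip it (eps-LDP)."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

def debias(reports, epsilon: float) -> float:
    """Unbiased estimate of the true fraction of 1s from noisy reports:
    observed = (2p - 1) * true + (1 - p), solved for true."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed + p - 1) / (2 * p - 1)

random.seed(0)
true_bits = [1] * 300 + [0] * 700
reports = [randomized_response(b, epsilon=1.0) for b in true_bits]
print(f"estimated fraction of 1s: {debias(reports, 1.0):.3f}")  # close to 0.300
```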
"VDDP: Verifiable Distributed Differential Privacy under the Client-Server-Verifier Setup",2504.21752v1,DBLP:conf/sp/BonehBCGI21,\cite{DBLP:conf/sp/BonehBCGI21},Lightweight Techniques for Private Heavy Hitters,http://arxiv.org/abs/2012.14884v5,"This paper presents Poplar, a new system for solving the private heavy-hitters problem. In this problem, there are many clients and a small set of data-collection servers. Each client holds a private bitstring. The servers want to recover the set of all popular strings, without learning anything else about any client's string. A web-browser vendor, for instance, can use Poplar to figure out which homepages are popular, without learning any user's homepage. We also consider the simpler private subset-histogram problem, in which the servers want to count how many clients hold strings in a particular set without revealing this set to the clients. Poplar uses two data-collection servers and, in a protocol run, each client send sends only a single message to the servers. Poplar protects client privacy against arbitrary misbehavior by one of the servers and our approach requires no public-key cryptography (except for secure channels), nor general-purpose multiparty computation. Instead, we rely on incremental distributed point functions, a new cryptographic tool that allows a client to succinctly secret-share the labels on the nodes of an exponentially large binary tree, provided that the tree has a single non-zero path. Along the way, we develop new general tools for providing malicious security in applications of distributed point functions.",True,True,"Dan Boneh and Elette Boyle and Henry Corrigan{-}Gibbs and Niv Gilboa and Yuval Ishai",2021.0,,https://doi.org/10.1109/SP40001.2021.00048,10.1109/SP40001.2021.00048,,Lightweight Techniques for Private Heavy Hitters,Lightweight Techniques for Private Heavy Hitters,http://arxiv.org/pdf/2012.14884v5,"This paper presents Poplar, a new system for solving the private heavy-hitters problem. In this problem, there are many clients and a small set of data-collection servers. Each client holds a private bitstring. The servers want to recover the set of all popular strings, without learning anything else about any client's string. A web-browser vendor, for instance, can use Poplar to figure out which homepages are popular, without learning any user's homepage. We also consider the simpler private subset-histogram problem, in which the servers want to count how many clients hold strings in a particular set without revealing this set to the clients. Poplar uses two data-collection servers and, in a protocol run, each client send sends only a single message to the servers. Poplar protects client privacy against arbitrary misbehavior by one of the servers and our approach requires no public-key cryptography (except for secure channels), nor general-purpose multiparty computation. Instead, we rely on incremental distributed point functions, a new cryptographic tool that allows a client to succinctly secret-share the labels on the nodes of an exponentially large binary tree, provided that the tree has a single non-zero path. Along the way, we develop new general tools for providing malicious security in applications of distributed point functions." 
"VDDP: Verifiable Distributed Differential Privacy under the Client-Server-Verifier Setup",2504.21752v1,DBLP:conf/sigmod/ChowdhuryW0MJ20,\cite{DBLP:conf/sigmod/ChowdhuryW0MJ20},"Crypt$ε$: Crypto-Assisted Differential Privacy on Untrusted Servers",http://arxiv.org/abs/1902.07756v5,"Differential privacy (DP) has steadily become the de-facto standard for achieving privacy in data analysis, which is typically implemented either in the ""central"" or ""local"" model. The local model has been more popular for commercial deployments as it does not require a trusted data collector. This increased privacy, however, comes at a cost of utility and algorithmic expressibility as compared to the central model. In this work, we propose, Crypt$\epsilon$, a system and programming framework that (1) achieves the accuracy guarantees and algorithmic expressibility of the central model (2) without any trusted data collector like in the local model. Crypt$\epsilon$ achieves the ""best of both worlds"" by employing two non-colluding untrusted servers that run DP programs on encrypted data from the data owners. Although straightforward implementations of DP programs using secure computation tools can achieve the above goal theoretically, in practice they are beset with many challenges such as poor performance and tricky security proofs. To this end, Crypt$\epsilon$ allows data analysts to author logical DP programs that are automatically translated to secure protocols that work on encrypted data. These protocols ensure that the untrusted servers learn nothing more than the noisy outputs, thereby guaranteeing DP (for computationally bounded adversaries) for all Crypt$\epsilon$ programs. Crypt$\epsilon$ supports a rich class of DP programs that can be expressed via a small set of transformation and measurement operators followed by arbitrary post-processing. Further, we propose performance optimizations leveraging the fact that the output is noisy. We demonstrate Crypt$\epsilon$'s feasibility for practical DP analysis with extensive empirical evaluations on real datasets.",True,True,"Amrita Roy Chowdhury and Chenghong Wang and Xi He and Ashwin Machanavajjhala and Somesh Jha",2020.0,,https://doi.org/10.1145/3318464.3380596,10.1145/3318464.3380596,,"Crypt$ε$: Crypto-Assisted Differential Privacy on Untrusted Servers",Crypt$ε$: Crypto-Assisted Differential Privacy on Untrusted Servers,https://arxiv.org/abs/1902.07756,Crypt\epsilon allows data analysts to author logical DP programs that are automatically translated to secure protocols that work on encrypted data. "VDDP: Verifiable Distributed Differential Privacy under the Client-Server-Verifier Setup",2504.21752v1,DBLP:conf/ccs/BellBGL020,\cite{DBLP:conf/ccs/BellBGL020},Secure Single-Server Aggregation with (Poly)Logarithmic Overhead,,,True,False,"James Henry Bell and Kallista A. Bonawitz and Adri{\`{a}} Gasc{\'{o}}n and Tancr{\`{e}}de Lepoint and Mariana Raykova",2020.0,,https://doi.org/10.1145/3372297.3417885,10.1145/3372297.3417885,,Secure Single-Server Aggregation with (Poly)Logarithmic Overhead,Secure Single-Server Aggregation with (Poly)Logarithmic Overhead,https://eprint.iacr.org/2020/704,We present the first constructions for secure aggregation that achieve polylogarithmic communication and computation per client. 
"VDDP: Verifiable Distributed Differential Privacy under the Client-Server-Verifier Setup",2504.21752v1,DBLP:conf/eurocrypt/DworkKMMN06,\cite{DBLP:conf/eurocrypt/DworkKMMN06},"Our Data, Ourselves: Privacy Via Distributed Noise Generation",,,True,False,"Cynthia Dwork and Krishnaram Kenthapadi and Frank McSherry and Ilya Mironov and Moni Naor",2006.0,,https://doi.org/10.1007/11761679\_29,10.1007/11761679\_29,,"Our Data, Ourselves: Privacy Via Distributed Noise Generation","[PDF] Our Data, Ourselves: Privacy via Distributed Noise Generation - IACR",https://iacr.org/archive/eurocrypt2006/40040493/40040493.pdf,"Abstract. In this work we provide efficient distributed protocols for generating shares of random noise, secure against malicious participants. The purpose" "VDDP: Verifiable Distributed Differential Privacy under the Client-Server-Verifier Setup",2504.21752v1,DBLP:conf/ccs/ChampionSU19,\cite{DBLP:conf/ccs/ChampionSU19},Securely Sampling Biased Coins with Applications to Differential Privacy,,,True,False,"Jeffrey Champion and Abhi Shelat and Jonathan R. Ullman",2019.0,,https://doi.org/10.1145/3319535.3354256,10.1145/3319535.3354256,,Securely Sampling Biased Coins with Applications to Differential Privacy,Securely Sampling Biased Coins with Applications to ...,https://www.cs.utexas.edu/~jchamps/Slides/SecurelySampling.pdf,"by J Champion · Cited by 37 — Securely Sampling Biased Coins with. Applications to Differential Privacy. Jeffrey Champion, abhi shelat, Jonathan Ullman. Northeastern University. Page 2" "VDDP: Verifiable Distributed Differential Privacy under the Client-Server-Verifier Setup",2504.21752v1,DBLP:conf/uss/BohlerK20,\cite{DBLP:conf/uss/BohlerK20},Secure Multi-party Computation of Differentially Private Median,,,True,False,"Jonas B{\""{o}}hler and Florian Kerschbaum",2020.0,,https://www.usenix.org/conference/usenixsecurity20/presentation/boehler,,,Secure Multi-party Computation of Differentially Private Median,[PDF] Secure Multi-party Computation of Differentially Private Median,https://www.usenix.org/system/files/sec20-bohler.pdf,"In the following, we introduce preliminaries for differential privacy and secure multi-party computation. 
We consider a set of input parties P =" "VDDP: Verifiable Distributed Differential Privacy under the Client-Server-Verifier Setup",2504.21752v1,DBLP:conf/ccs/BohlerK21,\cite{DBLP:conf/ccs/BohlerK21},Secure Multi-party Computation of Differentially Private Heavy Hitters,,,True,False,"Jonas B{\""{o}}hler and Florian Kerschbaum",2021.0,,https://doi.org/10.1145/3460120.3484557,10.1145/3460120.3484557,,Secure Multi-party Computation of Differentially Private Heavy Hitters,Secure Multi-party Computation of Differentially Private Heavy ...,https://dl.acm.org/doi/10.1145/3460120.3484557, "VDDP: Verifiable Distributed Differential Privacy under the Client-Server-Verifier Setup",2504.21752v1,DBLP:journals/corr/abs-2109-10074,\cite{DBLP:journals/corr/abs-2109-10074},"{STAR:} Distributed Secret Sharing for Private Threshold Aggregation Reporting",,,True,False,"Alex Davidson and Peter Snyder and E. B. Quirk and Joseph Genereux and Benjamin Livshits",2021.0,,https://arxiv.org/abs/2109.10074,,CoRR,"{STAR:} Distributed Secret Sharing for Private Threshold Aggregation Reporting",draft-dss-star-02 - STAR: Distributed Secret Sharing for ...,https://datatracker.ietf.org/doc/draft-dss-star/,"In this document we describe STAR, an efficient and secure threshold aggregation protocol for collecting measurements from clients by an untrusted aggregation" "VDDP: Verifiable Distributed Differential Privacy under the Client-Server-Verifier Setup",2504.21752v1,DBLP:conf/ccs/WeiYFCW23,\cite{DBLP:conf/ccs/WeiYFCW23},"Securely Sampling Discrete Gaussian Noise for Multi-Party Differential Privacy",,,True,False,"Chengkun Wei and Ruijing Yu and Yuan Fan and Wenzhi Chen and Tianhao Wang",2023.0,,https://doi.org/10.1145/3576915.3616641,10.1145/3576915.3616641,,"Securely Sampling Discrete Gaussian Noise for Multi-Party Differential Privacy",Securely Sampling Discrete Gaussian Noise for Multi-Party ...,https://dl.acm.org/doi/10.1145/3576915.3616641,"Our work presents the first MPC solution for sampling discrete Gaussian, a common type of noise used for constructing DP mechanisms, which plays nicely with" "VDDP: Verifiable Distributed Differential Privacy under the Client-Server-Verifier Setup",2504.21752v1,DBLP:conf/ccs/FuW24,\cite{DBLP:conf/ccs/FuW24},Benchmarking Secure Sampling Protocols for Differential Privacy,http://arxiv.org/abs/2409.10667v2,"Differential privacy (DP) is widely employed to provide privacy protection for individuals by limiting information leakage from the aggregated data. Two well-known models of DP are the central model and the local model.
The former requires a trustworthy server for data aggregation, while the latter requires individuals to add noise, significantly decreasing the utility of aggregated results. Recently, many studies have proposed to achieve DP with Secure Multi-party Computation (MPC) in distributed settings, namely, the distributed model, which has utility comparable to central model while, under specific security assumptions, preventing parties from obtaining others' information. One challenge of realizing DP in distributed model is efficiently sampling noise with MPC. Although many secure sampling methods have been proposed, they have different security assumptions and isolated theoretical analyses. There is a lack of experimental evaluations to measure and compare their performances. We fill this gap by benchmarking existing sampling protocols in MPC and performing comprehensive measurements of their efficiency. First, we present a taxonomy of the underlying techniques of these sampling protocols. Second, we extend widely used distributed noise generation protocols to be resilient against Byzantine attackers. Third, we implement discrete sampling protocols and align their security settings for a fair comparison. We then conduct an extensive evaluation to study their efficiency and utility.",True,True,"Yucheng Fu and Tianhao Wang",2024.0,,https://doi.org/10.1145/3658644.3690257,10.1145/3658644.3690257,,Benchmarking Secure Sampling Protocols for Differential Privacy,Benchmarking Secure Sampling Protocols for Differential Privacy,http://arxiv.org/pdf/2409.10667v2,"Differential privacy (DP) is widely employed to provide privacy protection for individuals by limiting information leakage from the aggregated data. Two well-known models of DP are the central model and the local model. The former requires a trustworthy server for data aggregation, while the latter requires individuals to add noise, significantly decreasing the utility of aggregated results. Recently, many studies have proposed to achieve DP with Secure Multi-party Computation (MPC) in distributed settings, namely, the distributed model, which has utility comparable to central model while, under specific security assumptions, preventing parties from obtaining others' information. One challenge of realizing DP in distributed model is efficiently sampling noise with MPC. Although many secure sampling methods have been proposed, they have different security assumptions and isolated theoretical analyses. There is a lack of experimental evaluations to measure and compare their performances. We fill this gap by benchmarking existing sampling protocols in MPC and performing comprehensive measurements of their efficiency. First, we present a taxonomy of the underlying techniques of these sampling protocols. Second, we extend widely used distributed noise generation protocols to be resilient against Byzantine attackers. Third, we implement discrete sampling protocols and align their security settings for a fair comparison. We then conduct an extensive evaluation to study their efficiency and utility." "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,TabelDiscovery,\cite{TabelDiscovery},Table Discovery in Data Lakes: State-of-the-art and Future Directions,,,True,False,"Grace Fan and Jin Wang and Yuliang Li and Ren{\'{e}}e J. 
Miller",2023.0,,,,,Table Discovery in Data Lakes: State-of-the-art and Future Directions,Table Discovery in Data Lakes: State-of-the-art and Future Directions,https://dl.acm.org/doi/pdf/10.1145/3555041.3589409,"We will cover table understanding tasks such as domain discov- ery, table annotation, and table representation learning which help data lake" "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,DataLake_Survey,\cite{DataLake_Survey},Data Lakes: A Survey of Functions and Systems,http://arxiv.org/abs/2106.09592v2,"Data lakes are becoming increasingly prevalent for big data management and data analytics. In contrast to traditional 'schema-on-write' approaches such as data warehouses, data lakes are repositories storing raw data in its original formats and providing a common access interface. Despite the strong interest raised from both academia and industry, there is a large body of ambiguity regarding the definition, functions and available technologies for data lakes. A complete, coherent picture of data lake challenges and solutions is still missing. This survey reviews the development, architectures, and systems of data lakes. We provide a comprehensive overview of research questions for designing and building data lakes. We classify the existing approaches and systems based on their provided functions for data lakes, which makes this survey a useful technical reference for designing, implementing and deploying data lakes. We hope that the thorough comparison of existing solutions and the discussion of open research challenges in this survey will motivate the future development of data lake research and practice.",True,True,"Rihan Hai and Christos Koutras and Christoph Quix and Matthias Jarke",2023.0,,,,{IEEE} Trans. Knowl. Data Eng.,Data Lakes: A Survey of Functions and Systems,Data Lakes: A Survey of Functions and Systems,http://arxiv.org/pdf/2106.09592v2,"Data lakes are becoming increasingly prevalent for big data management and data analytics. In contrast to traditional 'schema-on-write' approaches such as data warehouses, data lakes are repositories storing raw data in its original formats and providing a common access interface. Despite the strong interest raised from both academia and industry, there is a large body of ambiguity regarding the definition, functions and available technologies for data lakes. A complete, coherent picture of data lake challenges and solutions is still missing. This survey reviews the development, architectures, and systems of data lakes. We provide a comprehensive overview of research questions for designing and building data lakes. We classify the existing approaches and systems based on their provided functions for data lakes, which makes this survey a useful technical reference for designing, implementing and deploying data lakes. We hope that the thorough comparison of existing solutions and the discussion of open research challenges in this survey will motivate the future development of data lake research and practice." "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,AdelfioS13,\cite{AdelfioS13},Schema Extraction for Tabular Data on the Web,,,True,False,"Marco D. Adelfio and Hanan Samet",2013.0,,,,Proc. 
{VLDB} Endow.,Schema Extraction for Tabular Data on the Web,[PDF] Schema Extraction for Tabular Data on the Web ∗ - VLDB Endowment,http://www.vldb.org/pvldb/vol6/p421-adelfio.pdf,The schemas of these data tables are determined using a classification technique based on conditional random fields in combination with a novel feature "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,GoogleSearch,\cite{GoogleSearch},"Google Dataset Search: Building a search engine for datasets in an open Web ecosystem",,,True,False,"Dan Brickley and Matthew Burgess and Natasha F. Noy",2019.0,,,,,"Google Dataset Search: Building a search engine for datasets in an open Web ecosystem",Building a search engine for datasets in an open Web ecosystem,https://research.google/pubs/google-dataset-search-building-a-search-engine-for-datasets-in-an-open-web-ecosystem/,"In this paper, we discuss Google Dataset Search, a dataset-discovery tool that provides search capabilities over potentially all datasets published on the Web." "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,JOSIE,\cite{JOSIE},{JOSIE:} Overlap Set Similarity Search for Finding Joinable Tables in Data Lakes,,,True,False,"Erkang Zhu and Dong Deng and Fatemeh Nargesian and Ren{\'{e}}e J. Miller",2019.0,,,,,{JOSIE:} Overlap Set Similarity Search for Finding Joinable Tables in Data Lakes,JOSIE: Overlap Set Similarity Search for Finding Joinable Tables in ...,https://dl.acm.org/doi/10.1145/3299869.3300065,"We show that JOSIE completely outperforms the state-of-the-art overlap set similarity search techniques on data lakes." "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,Deepjoin,\cite{Deepjoin},DeepJoin: Joinable Table Discovery with Pre-trained Language Models,http://arxiv.org/abs/2212.07588v2,"Due to the usefulness in data enrichment for data analysis tasks, joinable table discovery has become an important operation in data lake management. Existing approaches target equi-joins, the most common way of combining tables for creating a unified view, or semantic joins, which tolerate misspellings and different formats to deliver more join results. They are either exact solutions whose running time is linear in the sizes of query column and target table repository or approximate solutions lacking precision. In this paper, we propose Deepjoin, a deep learning model for accurate and efficient joinable table discovery. Our solution is an embedding-based retrieval, which employs a pre-trained language model (PLM) and is designed as one framework serving both equi- and semantic joins. We propose a set of contextualization options to transform column contents to a text sequence.
The PLM reads the sequence and is fine-tuned to embed columns to vectors such that columns are expected to be joinable if they are close to each other in the vector space. Since the output of the PLM is fixed in length, the subsequent search procedure becomes independent of the column size. With a state-of-the-art approximate nearest neighbor search algorithm, the search time is logarithmic in the repository size. To train the model, we devise the techniques for preparing training data as well as data augmentation. The experiments on real datasets demonstrate that by training on a small subset of a corpus, Deepjoin generalizes to large datasets and its precision consistently outperforms other approximate solutions. Deepjoin is even more accurate than an exact solution to semantic joins when evaluated with labels from experts. Moreover, when equipped with a GPU, Deepjoin is up to two orders of magnitude faster than existing solutions.",True,True,"Yuyang Dong and Chuan Xiao and Takuma Nozawa and Masafumi Enomoto and Masafumi Oyamada",2023.0,,,,Proc. {VLDB} Endow.,DeepJoin: Joinable Table Discovery with Pre-trained Language Models,[PDF] DeepJoin: Joinable Table Discovery with Pre-trained Language ...,https://www.vldb.org/pvldb/vol16/p2458-dong.pdf,"DeepJoin is a deep learning model using a pre-trained language model for joinable table discovery, handling both equi- and semantic joins." "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,Snoopy,\cite{Snoopy},"Snoopy: Effective and Efficient Semantic Join Discovery via Proxy Columns",http://arxiv.org/abs/2502.16813v1,"Semantic join discovery, which aims to find columns in a table repository with high semantic joinabilities to a query column, is crucial for dataset discovery. Existing methods can be divided into two categories: cell-level methods and column-level methods. However, neither of them ensures both effectiveness and efficiency simultaneously. Cell-level methods, which compute the joinability by counting cell matches between columns, enjoy ideal effectiveness but suffer poor efficiency. In contrast, column-level methods, which determine joinability only by computing the similarity of column embeddings, enjoy proper efficiency but suffer poor effectiveness due to the issues occurring in their column embeddings: (i) semantics-joinability-gap, (ii) size limit, and (iii) permutation sensitivity. To address these issues, this paper proposes to compute column embeddings via proxy columns; furthermore, a novel column-level semantic join discovery framework, Snoopy, is presented, leveraging proxy-column-based embeddings to bridge effectiveness and efficiency. Specifically, the proposed column embeddings are derived from the implicit column-to-proxy-column relationships, which are captured by the lightweight approximate-graph-matching-based column projection. To acquire good proxy columns for guiding the column projection, we introduce a rank-aware contrastive learning paradigm.
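The embedding-based retrieval recipe in the DeepJoin abstract above (serialize a column to a text sequence, embed it with a pre-trained language model, then query an approximate nearest neighbor index) can be sketched in a few lines. This is a minimal sketch, not the paper's actual configuration: the sentence-transformers model name, the serialization format, and the HNSW parameters are all illustrative assumptions.

```python
# Minimal sketch of PLM-embedding-based joinable-column search (DeepJoin-style).
import hnswlib
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder, not the paper's

def serialize_column(table_name, col_name, cells, max_cells=50):
    # One contextualization option: "table. column: v1, v2, ..." as a text sequence.
    return f"{table_name}. {col_name}: " + ", ".join(map(str, cells[:max_cells]))

def build_index(columns):
    # columns: list of (table_name, col_name, cells) tuples from the data lake.
    texts = [serialize_column(t, c, cells) for t, c, cells in columns]
    vecs = encoder.encode(texts, normalize_embeddings=True)
    index = hnswlib.Index(space="cosine", dim=vecs.shape[1])
    index.init_index(max_elements=len(vecs), ef_construction=200, M=16)
    index.add_items(vecs, list(range(len(vecs))))
    return index

def search(index, columns, query_col, k=5):
    # Fixed-length embeddings make search independent of column size;
    # HNSW makes query time roughly logarithmic in the repository size.
    qvec = encoder.encode([serialize_column(*query_col)], normalize_embeddings=True)
    labels, dists = index.knn_query(qvec, k=k)
    return [(columns[i][0], columns[i][1], 1 - d) for i, d in zip(labels[0], dists[0])]
```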
Extensive experiments on four real-world datasets demonstrate that Snoopy outperforms SOTA column-level methods by 16% in Recall@25 and 10% in NDCG@25, and achieves superior efficiency--being at least 5 orders of magnitude faster than cell-level solutions, and 3.5x faster than existing column-level methods.",True,True,"Guo, Yuxiang and Mao, Yuren and Hu, Zhonghao and Chen, Lu and Gao, Yunjun",2025.0,,,,arXiv preprint arXiv:2502.16813,"Snoopy: Effective and Efficient Semantic Join Discovery via Proxy Columns",Effective and Efficient Semantic Join Discovery via Proxy Columns,https://arxiv.org/abs/2502.16813,"A novel column-level semantic join discovery framework, Snoopy, is presented, leveraging proxy-column-based embeddings to bridge effectiveness and efficiency." "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,starmine,\cite{starmine},"Semantics-aware Dataset Discovery from Data Lakes with Contextualized Column-based Representation Learning",http://arxiv.org/abs/2210.01922v2,"Dataset discovery from data lakes is essential in many real application scenarios. In this paper, we propose Starmie, an end-to-end framework for dataset discovery from data lakes (with table union search as the main use case). Our proposed framework features a contrastive learning method to train column encoders from pre-trained language models in a fully unsupervised manner. The column encoder of Starmie captures the rich contextual semantic information within tables by leveraging a contrastive multi-column pre-training strategy. We utilize the cosine similarity between column embedding vectors as the column unionability score and propose a filter-and-verification framework that allows exploring a variety of design choices to compute the unionability score between two tables accordingly. Empirical evaluation results on real table benchmark datasets show that Starmie outperforms the best-known solutions in the effectiveness of table union search by 6.8 in MAP and recall. Moreover, Starmie is the first to employ the HNSW (Hierarchical Navigable Small World) index to accelerate query processing of table union search, which provides a 3,000X performance gain over the linear scan baseline and a 400X performance gain over an LSH index (the state-of-the-art solution for data lake indexing).",True,True,"Grace Fan and Jin Wang and Yuliang Li and Dan Zhang and Ren{\'{e}}e J. Miller",2023.0,,,,Proc. {VLDB} Endow.,"Semantics-aware Dataset Discovery from Data Lakes with Contextualized Column-based Representation Learning",Semantics-aware Dataset Discovery from Data Lakes with ...,https://www.researchgate.net/publication/364194737_Semantics-aware_Dataset_Discovery_from_Data_Lakes_with_Contextualized_Column-based_Representation_Learning,Our proposed framework features a contrastive learning method to train column encoders from pre-trained language models in a fully unsupervised "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,santos,\cite{santos},SANTOS: Relationship-based Semantic Table Union Search,http://arxiv.org/abs/2209.13589v1,"Existing techniques for unionable table search define unionability using metadata (tables must have the same or similar schemas) or column-based metrics (for example, the values in a table should be drawn from the same domain). In this work, we introduce the use of semantic relationships between pairs of columns in a table to improve the accuracy of union search.
Consequently, we introduce a new notion of unionability that considers relationships between columns, together with the semantics of columns, in a principled way. To do so, we present two new methods to discover semantic relationship between pairs of columns. The first uses an existing knowledge base (KB), the second (which we call a ""synthesized KB"") uses knowledge from the data lake itself. We adopt an existing Table Union Search benchmark and present new (open) benchmarks that represent small and large real data lakes. We show that our new unionability search algorithm, called SANTOS, outperforms a state-of-the-art union search that uses a wide variety of column-based semantics, including word embeddings and regular expressions. We show empirically that our synthesized KB improves the accuracy of union search by representing relationship semantics that may not be contained in an available KB. This result hints at a promising future of creating synthesized KBs from data lakes with limited KB coverage and using them for union search.",True,True,"Aamod Khatiwada and Grace Fan and Roee Shraga and Zixuan Chen and Wolfgang Gatterbauer and Ren{\'{e}}e J. Miller and Mirek Riedewald",2023.0,,,,Proc. {ACM} Manag. Data,SANTOS: Relationship-based Semantic Table Union Search,SANTOS: Relationship-based Semantic Table Union Search,https://dl.acm.org/doi/10.1145/3588689,"Our new unionability search algorithm, called SANTOS, outperforms a state-of-the-art union search that uses a wide variety of column-based semantics." "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,TUS,\cite{TUS},Table Union Search on Open Data,,,True,False,"Fatemeh Nargesian and Erkang Zhu and Ken Q. Pu and Ren{\'{e}}e J. Miller",2018.0,,,,Proc. {VLDB} Endow.,Table Union Search on Open Data,[PDF] Table Union Search on Open Data,https://www.semanticscholar.org/paper/Table-Union-Search-on-Open-Data-Nargesian-Zhu/5cadff7988d29c1596689d5b864f87f371783a50,This work defines the table union search problem and presents a probabilistic solution for finding tables that are unionable with a query table within "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,Solo,\cite{Solo},"Solo: Data Discovery Using Natural Language Questions Via A Self-Supervised Approach",http://arxiv.org/abs/2301.03560v2,"Most deployed data discovery systems, such as Google Datasets, and open data portals only support keyword search. Keyword search is geared towards general audiences but limits the types of queries the systems can answer. We propose a new system that lets users write natural language questions directly. A major barrier to using this learned data discovery system is it needs expensive-to-collect training data, thus limiting its utility. In this paper, we introduce a self-supervised approach to assemble training datasets and train learned discovery systems without human intervention. It requires addressing several challenges, including the design of self-supervised strategies for data discovery, table representation strategies to feed to the models, and relevance models that work well with the synthetically generated questions. We combine all the above contributions into a system, Solo, that solves the problem end to end. The evaluation results demonstrate the new techniques outperform state-of-the-art approaches on well-known benchmarks. All in all, the technique is a stepping stone towards building learned discovery systems.
The code is open-sourced at https://github.com/TheDataStation/solo",True,True,"Qiming Wang and Raul Castro Fernandez",2023.0,,,,Proc. {ACM} Manag. Data,"Solo: Data Discovery Using Natural Language Questions Via A Self-Supervised Approach",[PDF] Solo: Data Discovery Using Natural Language Questions Via A Self ...,https://arxiv.org/pdf/2301.03560,"Solo is a system that allows users to write natural language questions for data discovery, using a self-supervised approach to train the system." "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,OpenDTR,\cite{OpenDTR},Open Domain Question Answering over Tables via Dense Retrieval,http://arxiv.org/abs/2103.12011v2,"Recent advances in open-domain QA have led to strong models based on dense retrieval, but only focused on retrieving textual passages. In this work, we tackle open-domain QA over tables for the first time, and show that retrieval can be improved by a retriever designed to handle tabular context. We present an effective pre-training procedure for our retriever and improve retrieval quality with mined hard negatives. As relevant datasets are missing, we extract a subset of Natural Questions (Kwiatkowski et al., 2019) into a Table QA dataset. We find that our retriever improves retrieval results from 72.0 to 81.1 recall@10 and end-to-end QA results from 33.8 to 37.7 exact match, over a BERT based retriever.",True,True,"Jonathan Herzig and Thomas M{\""{u}}ller and Syrine Krichene and Julian Martin Eisenschlos",2021.0,,,,,Open Domain Question Answering over Tables via Dense Retrieval,Open Domain Question Answering over Tables via Dense Retrieval,http://arxiv.org/pdf/2103.12011v2,"Recent advances in open-domain QA have led to strong models based on dense retrieval, but only focused on retrieving textual passages. In this work, we tackle open-domain QA over tables for the first time, and show that retrieval can be improved by a retriever designed to handle tabular context. We present an effective pre-training procedure for our retriever and improve retrieval quality with mined hard negatives. As relevant datasets are missing, we extract a subset of Natural Questions (Kwiatkowski et al., 2019) into a Table QA dataset. We find that our retriever improves retrieval results from 72.0 to 81.1 recall@10 and end-to-end QA results from 33.8 to 37.7 exact match, over a BERT based retriever." "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,OpenWiki,\cite{OpenWiki},"Open-WikiTable : Dataset for Open Domain Question Answering with Complex Reasoning over Table",,,True,False,"Sunjun Kweon and Yeonsu Kwon and Seonhee Cho and Yohan Jo and Edward Choi",2023.0,,,,,"Open-WikiTable : Dataset for Open Domain Question Answering with Complex Reasoning over Table",Open-WikiTable :Dataset for Open Domain Question Answering with ...,https://github.com/sean0042/Open_WikiTable,The first ODQA dataset that requires complex reasoning over tables. Open-WikiTable is built upon WikiSQL and WikiTableQuestions to be applicable in the open- "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,TAPAS,\cite{TAPAS},TAPAS: Weakly Supervised Table Parsing via Pre-training,http://arxiv.org/abs/2004.02349v2,"Answering natural language questions over tables is usually seen as a semantic parsing task. 
To alleviate the collection cost of full logical forms, one popular approach focuses on weak supervision consisting of denotations instead of logical forms. However, training semantic parsers from weak supervision poses difficulties, and in addition, the generated logical forms are only used as an intermediate step prior to retrieving the denotation. In this paper, we present TAPAS, an approach to question answering over tables without generating logical forms. TAPAS trains from weak supervision, and predicts the denotation by selecting table cells and optionally applying a corresponding aggregation operator to such selection. TAPAS extends BERT's architecture to encode tables as input, initializes from an effective joint pre-training of text segments and tables crawled from Wikipedia, and is trained end-to-end. We experiment with three different semantic parsing datasets, and find that TAPAS outperforms or rivals semantic parsing models by improving state-of-the-art accuracy on SQA from 55.1 to 67.2 and performing on par with the state-of-the-art on WIKISQL and WIKITQ, but with a simpler model architecture. We additionally find that transfer learning, which is trivial in our setting, from WIKISQL to WIKITQ, yields 48.7 accuracy, 4.2 points above the state-of-the-art.",True,True,"Jonathan Herzig and Pawel Krzysztof Nowak and Thomas M{\""{u}}ller and Francesco Piccinno and Julian Martin Eisenschlos",2020.0,,,,,TAPAS: Weakly Supervised Table Parsing via Pre-training,TaPas: Weakly Supervised Table Parsing via Pre-training,https://aclanthology.org/2020.acl-main.398/,"by J Herzig · 2020 · Cited by 784 — TaPas trains from weak supervision, and predicts the denotation by selecting table cells and optionally applying a corresponding aggregation operator to such" "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,GTR,\cite{GTR},"Retrieving Complex Tables with Multi-Granular Graph Representation Learning",http://arxiv.org/abs/2105.01736v1,"The task of natural language table retrieval (NLTR) seeks to retrieve semantically relevant tables based on natural language queries. Existing learning systems for this task often treat tables as plain text based on the assumption that tables are structured as dataframes. However, tables can have complex layouts which indicate diverse dependencies between subtable structures, such as nested headers. As a result, queries may refer to different spans of relevant content that is distributed across these structures. Moreover, such systems fail to generalize to novel scenarios beyond those seen in the training set. Prior methods are still distant from a generalizable solution to the NLTR problem, as they fall short in handling complex table layouts or queries over multiple granularities. To address these issues, we propose Graph-based Table Retrieval (GTR), a generalizable NLTR framework with multi-granular graph representation learning. In our framework, a table is first converted into a tabular graph, with cell nodes, row nodes and column nodes to capture content at different granularities. Then the tabular graph is input to a Graph Transformer model that can capture both table cell content and the layout structures. To enhance the robustness and generalizability of the model, we further incorporate a self-supervised pre-training task based on graph-context matching. Experimental results on two benchmarks show that our method leads to significant improvements over the current state-of-the-art systems. 
Further experiments demonstrate promising performance of our method on cross-dataset generalization, and enhanced capability of handling complex tables and fulfilling diverse query intents. Code and data are available at https://github.com/FeiWang96/GTR.",True,True,"Fei Wang and Kexuan Sun and Muhao Chen and Jay Pujara and Pedro A. Szekely",2021.0,,,,,"Retrieving Complex Tables with Multi-Granular Graph Representation Learning",[PDF] Retrieving Complex Tables with Multi-Granular Graph ... - arXiv,https://arxiv.org/pdf/2105.01736,GTR leverages state-of-the-art graph representation learning techniques to capture both content and layout structures of complex tables. "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,AdHoc_TR,\cite{AdHoc_TR},Ad Hoc Table Retrieval using Semantic Similarity,http://arxiv.org/abs/1802.06159v3,"We introduce and address the problem of ad hoc table retrieval: answering a keyword query with a ranked list of tables. This task is not only interesting on its own account, but is also being used as a core component in many other table-based information access scenarios, such as table completion or table mining. The main novel contribution of this work is a method for performing semantic matching between queries and tables. Specifically, we (i) represent queries and tables in multiple semantic spaces (both discrete sparse and continuous dense vector representations) and (ii) introduce various similarity measures for matching those semantic representations. We consider all possible combinations of semantic representations and similarity measures and use these as features in a supervised learning model. Using a purpose-built test collection based on Wikipedia tables, we demonstrate significant and substantial improvements over a state-of-the-art baseline.",True,True,"Shuo Zhang and Krisztian Balog",2018.0,,,,,Ad Hoc Table Retrieval using Semantic Similarity,Ad Hoc Table Retrieval using Semantic Similarity,http://arxiv.org/pdf/1802.06159v3,"We introduce and address the problem of ad hoc table retrieval: answering a keyword query with a ranked list of tables. This task is not only interesting on its own account, but is also being used as a core component in many other table-based information access scenarios, such as table completion or table mining. The main novel contribution of this work is a method for performing semantic matching between queries and tables. Specifically, we (i) represent queries and tables in multiple semantic spaces (both discrete sparse and continuous dense vector representations) and (ii) introduce various similarity measures for matching those semantic representations. We consider all possible combinations of semantic representations and similarity measures and use these as features in a supervised learning model. Using a purpose-built test collection based on Wikipedia tables, we demonstrate significant and substantial improvements over a state-of-the-art baseline." "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,TableSearch,\cite{TableSearch},Table Search Using a Deep Contextualized Language Model,http://arxiv.org/abs/2005.09207v2,"Pretrained contextualized language models such as BERT have achieved impressive results on various natural language processing benchmarks. Benefiting from multiple pretraining tasks and large scale training corpora, pretrained models can capture complex syntactic word relations. 
In this paper, we use the deep contextualized language model BERT for the task of ad hoc table retrieval. We investigate how to encode table content considering the table structure and input length limit of BERT. We also propose an approach that incorporates features from prior literature on table retrieval and jointly trains them with BERT. In experiments on public datasets, we show that our best approach can outperform the previous state-of-the-art method and BERT baselines with a large margin under different evaluation metrics.",True,True,"Zhiyu Chen and Mohamed Trabelsi and Jeff Heflin and Yinan Xu and Brian D. Davison",2020.0,,,,,Table Search Using a Deep Contextualized Language Model,Table Search Using a Deep Contextualized Language Model,http://arxiv.org/pdf/2005.09207v2,"Pretrained contextualized language models such as BERT have achieved impressive results on various natural language processing benchmarks. Benefiting from multiple pretraining tasks and large scale training corpora, pretrained models can capture complex syntactic word relations. In this paper, we use the deep contextualized language model BERT for the task of ad hoc table retrieval. We investigate how to encode table content considering the table structure and input length limit of BERT. We also propose an approach that incorporates features from prior literature on table retrieval and jointly trains them with BERT. In experiments on public datasets, we show that our best approach can outperform the previous state-of-the-art method and BERT baselines with a large margin under different evaluation metrics." "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,DSI,\cite{DSI},Transformer Memory as a Differentiable Search Index,http://arxiv.org/abs/2202.06991v3,"In this paper, we demonstrate that information retrieval can be accomplished with a single Transformer, in which all information about the corpus is encoded in the parameters of the model. To this end, we introduce the Differentiable Search Index (DSI), a new paradigm that learns a text-to-text model that maps string queries directly to relevant docids; in other words, a DSI model answers queries directly using only its parameters, dramatically simplifying the whole retrieval process. We study variations in how documents and their identifiers are represented, variations in training procedures, and the interplay between models and corpus sizes. Experiments demonstrate that given appropriate design choices, DSI significantly outperforms strong baselines such as dual encoder models. Moreover, DSI demonstrates strong generalization capabilities, outperforming a BM25 baseline in a zero-shot setup.",True,True,"Tay, Yi and Tran, Vinh Q and Dehghani, Mostafa and Ni, Jianmo and Bahri, Dara and Mehta, Harsh and Qin, Zhen and Hui, Kai and Zhao, Zhe and Gupta, Jai and others",2022.0,,,,,Transformer Memory as a Differentiable Search Index,Transformer Memory as a Differentiable Search Index,http://arxiv.org/pdf/2202.06991v3,"In this paper, we demonstrate that information retrieval can be accomplished with a single Transformer, in which all information about the corpus is encoded in the parameters of the model. To this end, we introduce the Differentiable Search Index (DSI), a new paradigm that learns a text-to-text model that maps string queries directly to relevant docids; in other words, a DSI model answers queries directly using only its parameters, dramatically simplifying the whole retrieval process. 
We study variations in how documents and their identifiers are represented, variations in training procedures, and the interplay between models and corpus sizes. Experiments demonstrate that given appropriate design choices, DSI significantly outperforms strong baselines such as dual encoder models. Moreover, DSI demonstrates strong generalization capabilities, outperforming a BM25 baseline in a zero-shot setup." "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,NCI,\cite{NCI},A Neural Corpus Indexer for Document Retrieval,http://arxiv.org/abs/2206.02743v3,"Current state-of-the-art document retrieval solutions mainly follow an index-retrieve paradigm, where the index is hard to be directly optimized for the final retrieval target. In this paper, we aim to show that an end-to-end deep neural network unifying training and indexing stages can significantly improve the recall performance of traditional methods. To this end, we propose Neural Corpus Indexer (NCI), a sequence-to-sequence network that generates relevant document identifiers directly for a designated query. To optimize the recall performance of NCI, we invent a prefix-aware weight-adaptive decoder architecture, and leverage tailored techniques including query generation, semantic document identifiers, and consistency-based regularization. Empirical studies demonstrated the superiority of NCI on two commonly used academic benchmarks, achieving +21.4% and +16.8% relative enhancement for Recall@1 on NQ320k dataset and R-Precision on TriviaQA dataset, respectively, compared to the best baseline method.",True,True,"Wang, Yujing and Hou, Yingyan and Wang, Haonan and Miao, Ziming and Wu, Shibin and Sun, Hao and Chen, Qi and Xia, Yuqing and Chi, Chengmin and Zhao, Guoshuai and others",2022.0,,,,,A Neural Corpus Indexer for Document Retrieval,A Neural Corpus Indexer for Document Retrieval,http://arxiv.org/pdf/2206.02743v3,"Current state-of-the-art document retrieval solutions mainly follow an index-retrieve paradigm, where the index is hard to be directly optimized for the final retrieval target. In this paper, we aim to show that an end-to-end deep neural network unifying training and indexing stages can significantly improve the recall performance of traditional methods. To this end, we propose Neural Corpus Indexer (NCI), a sequence-to-sequence network that generates relevant document identifiers directly for a designated query. To optimize the recall performance of NCI, we invent a prefix-aware weight-adaptive decoder architecture, and leverage tailored techniques including query generation, semantic document identifiers, and consistency-based regularization. Empirical studies demonstrated the superiority of NCI on two commonly used academic benchmarks, achieving +21.4% and +16.8% relative enhancement for Recall@1 on NQ320k dataset and R-Precision on TriviaQA dataset, respectively, compared to the best baseline method." "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,DSI-QG,\cite{DSI-QG},"Bridging the Gap Between Indexing and Retrieval for Differentiable Search Index with Query Generation",http://arxiv.org/abs/2206.10128v3,"The Differentiable Search Index (DSI) is an emerging paradigm for information retrieval. Unlike traditional retrieval architectures where index and retrieval are two different and separate components, DSI uses a single transformer model to perform both indexing and retrieval. 
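The DSI paradigm quoted above reduces retrieval to sequence-to-sequence generation: a single model is trained to emit a document identifier string for a query, so "indexing" is just supervised training and "retrieval" is decoding. Below is a minimal sketch with Hugging Face transformers; the string docids, the toy training step, and the model size are illustrative assumptions, and real DSI systems add docid-vocabulary design and constrained decoding over valid identifiers.

```python
# Minimal sketch of the Differentiable Search Index (DSI) idea.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def indexing_step(doc_text, docid, optimizer):
    # "Indexing" = training the model to map document text (or, as in DSI-QG,
    # generated queries) to its identifier string.
    inputs = tok(doc_text, return_tensors="pt", truncation=True)
    labels = tok(docid, return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

def retrieve(query, k=5):
    # Retrieval decodes candidate docids with beam search; a production system
    # would constrain the beams to valid identifiers (e.g., via a prefix trie).
    inputs = tok(query, return_tensors="pt")
    out = model.generate(**inputs, num_beams=k, num_return_sequences=k,
                         max_new_tokens=8)
    return [tok.decode(o, skip_special_tokens=True) for o in out]
```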
In this paper, we identify and tackle an important issue of current DSI models: the data distribution mismatch that occurs between the DSI indexing and retrieval processes. Specifically, we argue that, at indexing, current DSI methods learn to build connections between the text of long documents and the identifier of the documents, but then retrieval of document identifiers is based on queries that are commonly much shorter than the indexed documents. This problem is further exacerbated when using DSI for cross-lingual retrieval, where document text and query text are in different languages. To address this fundamental problem of current DSI models, we propose a simple yet effective indexing framework for DSI, called DSI-QG. When indexing, DSI-QG represents documents with a number of potentially relevant queries generated by a query generation model and re-ranked and filtered by a cross-encoder ranker. The presence of these queries at indexing allows the DSI models to connect a document identifier to a set of queries, hence mitigating data distribution mismatches present between the indexing and the retrieval phases. Empirical results on popular mono-lingual and cross-lingual passage retrieval datasets show that DSI-QG significantly outperforms the original DSI model.",True,True,"Shengyao Zhuang and Houxing Ren and Linjun Shou and Jian Pei and Ming Gong and Guido Zuccon and Daxin Jiang",2022.0,,,,CoRR,"Bridging the Gap Between Indexing and Retrieval for Differentiable Search Index with Query Generation",Bridging the Gap Between Indexing and Retrieval for Differentiable ...,https://arxiv.org/abs/2206.10128, "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,CorpusLM,\cite{CorpusLM},"CorpusLM: Towards a Unified Language Model on Corpus for Knowledge-Intensive Tasks",http://arxiv.org/abs/2402.01176v2,"Large language models (LLMs) have gained significant attention in various fields but prone to hallucination, especially in knowledge-intensive (KI) tasks. To address this, retrieval-augmented generation (RAG) has emerged as a popular solution to enhance factual accuracy. However, traditional retrieval modules often rely on large document index and disconnect with generative tasks. With the advent of generative retrieval (GR), language models can retrieve by directly generating document identifiers (DocIDs), offering superior performance in retrieval tasks. However, the potential relationship between GR and downstream tasks remains unexplored. In this paper, we propose \textbf{CorpusLM}, a unified language model that leverages external corpus to tackle various knowledge-intensive tasks by integrating generative retrieval, closed-book generation, and RAG through a unified greedy decoding process. We design the following mechanisms to facilitate effective retrieval and generation, and improve the end-to-end effectiveness of KI tasks: (1) We develop a ranking-oriented DocID list generation strategy, which refines GR by directly learning from a DocID ranking list, to improve retrieval quality. (2) We design a continuous DocIDs-References-Answer generation strategy, which facilitates effective and efficient RAG. (3) We employ well-designed unsupervised DocID understanding tasks, to comprehend DocID semantics and their relevance to downstream tasks. We evaluate our approach on the widely used KILT benchmark with two variants of backbone models, i.e., T5 and Llama2.
Experimental results demonstrate the superior performance of our models in both retrieval and downstream tasks.",True,True,"Xiaoxi Li and Zhicheng Dou and Yujia Zhou and Fangchao Liu",2024.0,,,,,"CorpusLM: Towards a Unified Language Model on Corpus for Knowledge-Intensive Tasks",CorpusLM: Towards a Unified Language Model on Corpus ...,https://dl.acm.org/doi/10.1145/3626772.3657778,"In this paper, we propose CorpusLM, a unified language model that leverages external corpus to tackle various knowledge-intensive tasks." "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,Tiger,\cite{Tiger},Recommender Systems with Generative Retrieval,http://arxiv.org/abs/2305.05065v3,"Modern recommender systems perform large-scale retrieval by first embedding queries and item candidates in the same unified space, followed by approximate nearest neighbor search to select top candidates given a query embedding. In this paper, we propose a novel generative retrieval approach, where the retrieval model autoregressively decodes the identifiers of the target candidates. To that end, we create semantically meaningful tuple of codewords to serve as a Semantic ID for each item. Given Semantic IDs for items in a user session, a Transformer-based sequence-to-sequence model is trained to predict the Semantic ID of the next item that the user will interact with. To the best of our knowledge, this is the first Semantic ID-based generative model for recommendation tasks. We show that recommender systems trained with the proposed paradigm significantly outperform the current SOTA models on various datasets. In addition, we show that incorporating Semantic IDs into the sequence-to-sequence model enhances its ability to generalize, as evidenced by the improved retrieval performance observed for items with no prior interaction history.",True,True,"Rajput, Shashank and Mehta, Nikhil and Singh, Anima and Keshavan, Raghunandan and Vu, Trung and Heidt, Lukasz and Hong, Lichan and Tay, Yi and Tran, Vinh Q and Samost, Jonah and others",2023.0,,,,,Recommender Systems with Generative Retrieval,Recommender Systems with Generative Retrieval,http://arxiv.org/pdf/2305.05065v3,"Modern recommender systems perform large-scale retrieval by first embedding queries and item candidates in the same unified space, followed by approximate nearest neighbor search to select top candidates given a query embedding. In this paper, we propose a novel generative retrieval approach, where the retrieval model autoregressively decodes the identifiers of the target candidates. To that end, we create semantically meaningful tuple of codewords to serve as a Semantic ID for each item. Given Semantic IDs for items in a user session, a Transformer-based sequence-to-sequence model is trained to predict the Semantic ID of the next item that the user will interact with. To the best of our knowledge, this is the first Semantic ID-based generative model for recommendation tasks. We show that recommender systems trained with the proposed paradigm significantly outperform the current SOTA models on various datasets. In addition, we show that incorporating Semantic IDs into the sequence-to-sequence model enhances its ability to generalize, as evidenced by the improved retrieval performance observed for items with no prior interaction history." 
"Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,DSI++,\cite{DSI++},{DSI++:} Updating Transformer Memory with New Documents,,,True,False,"Sanket Vaibhav Mehta and Jai Gupta and Yi Tay and Mostafa Dehghani and Vinh Q. Tran and Jinfeng Rao and Marc Najork and Emma Strubell and Donald Metzler",2023.0,,,,,{DSI++:} Updating Transformer Memory with New Documents,DSI++: Updating Transformer Memory with New Documents,https://aclanthology.org/2023.emnlp-main.510/,"DSI++: Updating Transformer Memory with New Documents - ACL Anthology Anthology ID:2023.emnlp-main.510 Volume:Proceedings of the 2023 Conference on Empirical Methods in Natural Language ProcessingMonth:December Year:2023 Address:Singapore Editors:Houda Bouamor, Juan Pino, Kalika BaliVenue:EMNLPSIG:Publisher:Association for Computational Linguistics Note:Pages:8198–8213 Language:URL:https://aclanthology.org/2023.emnlp-main.510/DOI:10.18653/v1/2023.emnlp-main.510Bibkey:mehta-etal-2023-dsi Cite (ACL):Sanket Vaibhav Mehta, Jai Gupta, Yi Tay, Mostafa Dehghani, Vinh Q. Association for Computational Linguistics.Cite (Informal):DSI++: Updating Transformer Memory with New Documents (Mehta et al., EMNLP 2023)Copy Citation:BibTeX Markdown MODS XML Endnote More options…PDF:https://aclanthology.org/2023.emnlp-main.510.pdfVideo:https://aclanthology.org/2023.emnlp-main.510.mp4 title = ""{DSI}++: Updating Transformer Memory with New Documents"", DSI++: Updating Transformer Memory with New Documents Mehta Houda Juan Kalika DSI++: Updating Transformer Memory with New Documents (Mehta et al., EMNLP 2023) * DSI++: Updating Transformer Memory with New Documents (Mehta et al., EMNLP 2023)" "Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index",2504.21282v1,CLEVER,\cite{CLEVER},Continual Learning for Generative Retrieval over Dynamic Corpora,http://arxiv.org/abs/2308.14968v1,"Generative retrieval (GR) directly predicts the identifiers of relevant documents (i.e., docids) based on a parametric model. It has achieved solid performance on many ad-hoc retrieval tasks. So far, these tasks have assumed a static document collection. In many practical scenarios, however, document collections are dynamic, where new documents are continuously added to the corpus. The ability to incrementally index new documents while preserving the ability to answer queries with both previously and newly indexed relevant documents is vital to applying GR models. In this paper, we address this practical continual learning problem for GR. We put forward a novel Continual-LEarner for generatiVE Retrieval (CLEVER) model and make two major contributions to continual learning for GR: (i) To encode new documents into docids with low computational cost, we present Incremental Product Quantization, which updates a partial quantization codebook according to two adaptive thresholds; and (ii) To memorize new documents for querying without forgetting previous knowledge, we propose a memory-augmented learning mechanism, to form meaningful connections between old and new documents. 
Empirical results demonstrate the effectiveness and efficiency of the proposed model.",True,True,"Jiangui Chen and Ruqing Zhang and Jiafeng Guo and Maarten de Rijke and Wei Chen and Yixing Fan and Xueqi Cheng",2023.0,,,,,Continual Learning for Generative Retrieval over Dynamic Corpora,Continual Learning for Generative Retrieval over Dynamic Corpora,http://arxiv.org/pdf/2308.14968v1,"Generative retrieval (GR) directly predicts the identifiers of relevant documents (i.e., docids) based on a parametric model. It has achieved solid performance on many ad-hoc retrieval tasks. So far, these tasks have assumed a static document collection. In many practical scenarios, however, document collections are dynamic, where new documents are continuously added to the corpus. The ability to incrementally index new documents while preserving the ability to answer queries with both previously and newly indexed relevant documents is vital to applying GR models. In this paper, we address this practical continual learning problem for GR. We put forward a novel Continual-LEarner for generatiVE Retrieval (CLEVER) model and make two major contributions to continual learning for GR: (i) To encode new documents into docids with low computational cost, we present Incremental Product Quantization, which updates a partial quantization codebook according to two adaptive thresholds; and (ii) To memorize new documents for querying without forgetting previous knowledge, we propose a memory-augmented learning mechanism, to form meaningful connections between old and new documents. Empirical results demonstrate the effectiveness and efficiency of the proposed model." "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,ErroDetection,\cite{ErroDetection},Exploiting Active Learning in Novel Refractive Error Detection with Smartphones,,,True,False,"Fu, Eugene Yujun and Yang, Zhongqi and Leong, Hong Va and Ngai, Grace and Do, Chi-wai and Chan, Lily",2020.0,,,,,Exploiting Active Learning in Novel Refractive Error Detection with Smartphones,Exploiting active learning in novel refractive error detection with ...,https://repository.eduhk.hk/en/publications/exploiting-active-learning-in-novel-refractive-error-detection-wi,
"CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,ImageCaption,\cite{ImageCaption},Structural Semantic Adversarial Active Learning for Image Captioning,,,True,False,"Zhang, Beichen and Li, Liang and Su, Li and Wang, Shuhui and Deng, Jincan and Zha, Zheng-Jun and Huang, Qingming",2020.0,,,,,Structural Semantic Adversarial Active Learning for Image Captioning,Structural Semantic Adversarial Active Learning for Image Captioning,https://dl.acm.org/doi/abs/10.1145/3394171.3413885,We propose a structural semantic adversarial active learning (SSAAL) model that leverages both visual and textual information for deriving the most "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,PersonIdentification,\cite{PersonIdentification},Cluster and Scatter: A Multi-Grained Active Semi-Supervised Learning Framework for Scalable Person Re-Identification,,,True,False,"Hu, Bingyu and Zha, Zheng-Jun and Liu, Jiawei and Zhu, Xierong and Xie, Hongtao",2021.0,,,,,Cluster and Scatter: A Multi-Grained Active Semi-Supervised Learning Framework for Scalable Person Re-Identification,arXiv:2204.10008v1 [cs.CV] 21 Apr 2022,https://arxiv.org/pdf/2204.10008,"by D Jin · 2022 · Cited by 4 — Cluster and scatter: A multi-grained active semi-supervised learning framework for scalable person re-identification. In ACMMM, pages. 2605" "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,lewis1994heterogeneous,\cite{lewis1994heterogeneous},Heterogeneous uncertainty sampling for supervised learning,,,True,False,"Lewis, David D and Catlett, Jason",1994.0,,,,,Heterogeneous uncertainty sampling for supervised learning,Heterogeneous Uncertainty Sampling for Supervised ...,https://www.sciencedirect.com/science/article/pii/B978155860335650026X,by DD Lewis · 1994 · Cited by 1814 — Uncertainty sampling methods iteratively request class labels for training instances whose classes are uncertain despite the previous labeled instances. "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,lewis1994sequential,\cite{lewis1994sequential},A Sequential Algorithm for Training Text Classifiers,http://arxiv.org/abs/cmp-lg/9407020v2,"The ability to cheaply train text classifiers is critical to their use in information retrieval, content analysis, natural language processing, and other tasks involving data which is partly or fully textual. An algorithm for sequential sampling during machine learning of statistical classifiers was developed and tested on a newswire text categorization task. This method, which we call uncertainty sampling, reduced by as much as 500-fold the amount of training data that would have to be manually classified to achieve a given level of effectiveness.",True,True,"Lewis, David D and Gale, William A",1994.0,,,,,A Sequential Algorithm for Training Text Classifiers,A Sequential Algorithm for Training Text Classifiers,http://arxiv.org/pdf/cmp-lg/9407020v2,"The ability to cheaply train text classifiers is critical to their use in information retrieval, content analysis, natural language processing, and other tasks involving data which is partly or fully textual. An algorithm for sequential sampling during machine learning of statistical classifiers was developed and tested on a newswire text categorization task. 
This method, which we call uncertainty sampling, reduced by as much as 500-fold the amount of training data that would have to be manually classified to achieve a given level of effectiveness." "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,joshi2009multi,\cite{joshi2009multi},Active Learning for Multi-class Image Classification,http://arxiv.org/abs/2505.06825v1,"A principle bottleneck in image classification is the large number of training examples needed to train a classifier. Using active learning, we can reduce the number of training examples to teach a CNN classifier by strategically selecting examples. Assigning values to image examples using different uncertainty metrics allows the model to identify and select high-value examples in a smaller training set size. We demonstrate results for digit recognition and fruit classification on the MNIST and Fruits360 data sets. We formally compare results for four different uncertainty metrics. Finally, we observe active learning is also effective on simpler (binary) classification tasks, but marked improvement from random sampling is more evident on more difficult tasks. We show active learning is a viable algorithm for image classification problems.",True,True,"Joshi, Ajay J and Porikli, Fatih and Papanikolopoulos, Nikolaos",2009.0,,,,,Active Learning for Multi-class Image Classification,Multi-Class Active Learning for Image Classification,https://porikli.com/mysite/pdfs/porikli%202009%20-%20Multi-Class%20Active%20Learning%20for%20Image%20Classification.pdf,"by AJ Joshi · Cited by 989 — In this paper, we have proposed a simple active learning method for multi-class image classification. The proposed method achieves significant reduction in" "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,luo2013latent,\cite{luo2013latent},Latent structured active learning,,,True,False,"Luo, Wenjie and Schwing, Alex and Urtasun, Raquel",2013.0,,,,NeurIPS,Latent structured active learning,[PDF] Latent Structured Active Learning - Alexander Schwing,https://www.alexander-schwing.de/papers/LuoEtAl_NIPS2013.pdf,In this paper we present active learning algorithms in the context of structured prediction problems. To reduce the amount of labeling necessary to learn "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,settles2012active,\cite{settles2012active},Active learning: Synthesis lectures on artificial intelligence and machine learning,,,True,False,"Settles, Burr",2012.0,,,,Morgan {\&} Claypool Publishers,Active learning: Synthesis lectures on artificial intelligence and machine learning,Active Learning - Book,https://link.springer.com/book/10.1007/978-3-031-01560-1,by B Settles · Cited by 3007 — Part of the book series: Synthesis Lectures on Artificial Intelligence and Machine Learning (SLAIML) ... The key idea behind active learning is that a machine "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,blundell2015weight,\cite{blundell2015weight},Weight Uncertainty in Neural Networks,http://arxiv.org/abs/1505.05424v2,"We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. 
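Uncertainty sampling, as introduced in the Lewis and Gale entry above, amounts to scoring each unlabeled example by how unsure the current classifier is about it and requesting labels for the top of the ranking. A minimal pool-based sketch follows; the scikit-learn-style predict_proba interface and the entropy criterion are illustrative assumptions (least-confidence or margin scores are common alternatives).

```python
# Minimal sketch of pool-based uncertainty sampling for active learning.
import numpy as np

def entropy_sampling(model, X_pool, budget):
    # model: any probabilistic classifier exposing predict_proba (assumed).
    probs = model.predict_proba(X_pool)                       # shape (n, classes)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)  # high = uncertain
    return np.argsort(-entropy)[:budget]                      # indices to label next
```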
It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification. We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning.",True,True,"Blundell, Charles and Cornebise, Julien and Kavukcuoglu, Koray and Wierstra, Daan",2015.0,,,,,Weight Uncertainty in Neural Networks,Weight Uncertainty in Neural Networks,http://arxiv.org/pdf/1505.05424v2,"We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification. We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning." "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,gal2016dropout,\cite{gal2016dropout},"Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning",http://arxiv.org/abs/1506.02142v6,"Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs -- extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and non-linearities are assessed on tasks of regression and classification, using MNIST as an example. 
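The dropout-as-Bayesian-approximation result quoted above is most often used in practice as Monte Carlo (MC) dropout: keep dropout sampling active at test time, run several stochastic forward passes, and read the spread across passes as a model-uncertainty estimate. A minimal torch sketch, assuming a network whose only stochastic layers are nn.Dropout; the pass count T is illustrative.

```python
# Minimal sketch of MC dropout uncertainty estimation (Gal & Ghahramani style).
import torch

@torch.no_grad()
def mc_dropout_predict(model, x, T=20):
    model.train()  # keeps dropout active; assumes no batch-norm layers to disturb
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(T)])
    return probs.mean(dim=0), probs.var(dim=0)  # predictive mean and variance
```

In an active learning loop, the per-example predictive variance (or the entropy of the mean) from such passes can replace the single-pass scores used by plain uncertainty sampling.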
We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning.",True,True,Yarin Gal and Zoubin Ghahramani,2016.0,,,,,"Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning",Representing Model Uncertainty in Deep Learning - arXiv,https://arxiv.org/abs/1506.02142,In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,huang2021semi,\cite{huang2021semi},Semi-Supervised Active Learning with Temporal Output Discrepancy,,,True,False,"Huang, Siyu and Wang, Tianyang and Xiong, Haoyi and Huan, Jun and Dou, Dejing",2021.0,,,,,Semi-Supervised Active Learning with Temporal Output Discrepancy,Supplementary Material: Semi-Supervised Active Learning ...,https://openaccess.thecvf.com/content/ICCV2021/supplemental/Huang_Semi-Supervised_Active_Learning_ICCV_2021_supplemental.pdf,Semi-Supervised Active Learning with Temporal Output Discrepancy. Siyu Huang1. Tianyang Wang2. Haoyi Xiong1. Jun Huan3. Dejing Dou1. 1Baidu Research. 2Austin "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,guo2010active,\cite{guo2010active},Active instance sampling via matrix partition.,,,True,False,"Guo, Yuhong",2010.0,,,,,Active instance sampling via matrix partition.,Active instance sampling via matrix partition - Volume 1,https://dl.acm.org/doi/10.5555/2997189.2997279,"by Y Guo · 2010 · Cited by 183 — By employing a Gaussian process framework, this mutual information based instance selection problem can be formulated as a matrix partition problem. Although" "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,yang2015multi,\cite{yang2015multi},Multi-class active learning by uncertainty sampling with diversity maximization,,,True,False,"Yang, Yi and Ma, Zhigang and Nie, Feiping and Chang, Xiaojun and Hauptmann, Alexander G",2015.0,,,,Int. J. Comput. Vis.,Multi-class active learning by uncertainty sampling with diversity maximization,Multi-class active learning by uncertainty sampling with diversity ...,https://research.monash.edu/en/publications/multi-class-active-learning-by-uncertainty-sampling-with-diversit,"As a multi-class active learning algorithm, our algorithm is able to exploit uncertainty across multiple classes. An efficient algorithm is used to optimize the" "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,nguyen2004active,\cite{nguyen2004active},Active learning using pre-clustering,,,True,False,"Nguyen, Hieu T and Smeulders, Arnold",2004.0,,,,,Active learning using pre-clustering,Active learning using pre-clustering | Proceedings of the ...,https://dl.acm.org/doi/10.1145/1015330.1015349,The main contribution of the paper is a formal framework that incorporates clustering into active learning. 
The algorithm first constructs a classifier on the "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,sener2018active,\cite{sener2018active},Active Learning for Convolutional Neural Networks: A Core-Set Approach,http://arxiv.org/abs/1708.00489v4,"Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe; training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (ie. active learning). Our empirical study suggests that many of the active learning heuristics in the literature are not effective when applied to CNNs in batch setting. Inspired by these limitations, we define the problem of active learning as core-set selection, ie. choosing set of points such that a model learned over the selected subset is competitive for the remaining data points. We further present a theoretical result characterizing the performance of any selected subset using the geometry of the datapoints. As an active learning algorithm, we choose the subset which is expected to yield best result according to our characterization. Our experiments show that the proposed method significantly outperforms existing approaches in image classification experiments by a large margin.",True,True,"Sener, Ozan and Savarese, Silvio",2018.0,,,,,Active Learning for Convolutional Neural Networks: A Core-Set Approach,Active Learning for Convolutional Neural Networks: A Core ...,https://arxiv.org/abs/1708.00489,"by O Sener · 2017 · Cited by 2576 — We define the problem of active learning as core-set selection, ie. choosing set of points such that a model learned over the selected subset is competitive" "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,yoo2019learning,\cite{yoo2019learning},Learning Loss for Active Learning,http://arxiv.org/abs/1905.03677v1,"The performance of deep neural networks improves with more annotated data. The problem is that the budget for annotation is limited. One solution to this is active learning, where a model asks human to annotate data that it perceived as uncertain. A variety of recent methods have been proposed to apply active learning to deep networks but most of them are either designed specific for their target tasks or computationally inefficient for large networks. In this paper, we propose a novel active learning method that is simple but task-agnostic, and works efficiently with the deep networks. We attach a small parametric module, named ""loss prediction module,"" to a target network, and learn it to predict target losses of unlabeled inputs. Then, this module can suggest data that the target model is likely to produce a wrong prediction. This method is task-agnostic as networks are learned from a single loss regardless of target tasks. We rigorously validate our method through image classification, object detection, and human pose estimation, with the recent network architectures. 
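The core-set formulation in the Sener and Savarese entry above selects a batch so that every data point lies close to some labeled point in feature space, which the authors approximate with greedy k-center selection: repeatedly pick the point farthest from the current centers. A minimal sketch over plain feature vectors; in practice the features would come from the network's penultimate layer, and the dense distance computation here is an illustrative simplification.

```python
# Minimal sketch of greedy k-center selection for core-set active learning.
import numpy as np

def k_center_greedy(features, labeled_idx, budget):
    # Distance from every point to its nearest already-labeled point.
    dists = np.min(
        np.linalg.norm(features[:, None] - features[labeled_idx][None], axis=2),
        axis=1,
    )
    picked = []
    for _ in range(budget):
        i = int(np.argmax(dists))                  # farthest point becomes a center
        picked.append(i)
        new_d = np.linalg.norm(features - features[i], axis=1)
        dists = np.minimum(dists, new_d)           # update nearest-center distances
    return picked
```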
The results demonstrate that our method consistently outperforms the previous methods over the tasks.",True,True,"Yoo, Donggeun and Kweon, In So",2019.0,,,,,Learning Loss for Active Learning,Learning Loss for Active Learning,http://arxiv.org/pdf/1905.03677v1,"The performance of deep neural networks improves with more annotated data. The problem is that the budget for annotation is limited. One solution to this is active learning, where a model asks human to annotate data that it perceived as uncertain. A variety of recent methods have been proposed to apply active learning to deep networks but most of them are either designed specific for their target tasks or computationally inefficient for large networks. In this paper, we propose a novel active learning method that is simple but task-agnostic, and works efficiently with the deep networks. We attach a small parametric module, named ""loss prediction module,"" to a target network, and learn it to predict target losses of unlabeled inputs. Then, this module can suggest data that the target model is likely to produce a wrong prediction. This method is task-agnostic as networks are learned from a single loss regardless of target tasks. We rigorously validate our method through image classification, object detection, and human pose estimation, with the recent network architectures. The results demonstrate that our method consistently outperforms the previous methods over the tasks." "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,yuan2021multiple,\cite{yuan2021multiple},Multiple instance active learning for object detection,http://arxiv.org/abs/2104.02324v1,"Despite the substantial progress of active learning for image recognition, there still lacks an instance-level active learning method specified for object detection. In this paper, we propose Multiple Instance Active Object Detection (MI-AOD), to select the most informative images for detector training by observing instance-level uncertainty. MI-AOD defines an instance uncertainty learning module, which leverages the discrepancy of two adversarial instance classifiers trained on the labeled set to predict instance uncertainty of the unlabeled set. MI-AOD treats unlabeled images as instance bags and feature anchors in images as instances, and estimates the image uncertainty by re-weighting instances in a multiple instance learning (MIL) fashion. Iterative instance uncertainty learning and re-weighting facilitate suppressing noisy instances, toward bridging the gap between instance uncertainty and image-level uncertainty. Experiments validate that MI-AOD sets a solid baseline for instance-level active learning. On commonly used object detection datasets, MI-AOD outperforms state-of-the-art methods with significant margins, particularly when the labeled sets are small. Code is available at https://github.com/yuantn/MI-AOD.",True,True,"Yuan, Tianning and Wan, Fang and Fu, Mengying and Liu, Jianzhuang and Xu, Songcen and Ji, Xiangyang and Ye, Qixiang",2021.0,,,,,Multiple instance active learning for object detection,Multiple instance active learning for object detection,http://arxiv.org/pdf/2104.02324v1,"Despite the substantial progress of active learning for image recognition, there still lacks an instance-level active learning method specified for object detection. 
In this paper, we propose Multiple Instance Active Object Detection (MI-AOD), to select the most informative images for detector training by observing instance-level uncertainty. MI-AOD defines an instance uncertainty learning module, which leverages the discrepancy of two adversarial instance classifiers trained on the labeled set to predict instance uncertainty of the unlabeled set. MI-AOD treats unlabeled images as instance bags and feature anchors in images as instances, and estimates the image uncertainty by re-weighting instances in a multiple instance learning (MIL) fashion. Iterative instance uncertainty learning and re-weighting facilitate suppressing noisy instances, toward bridging the gap between instance uncertainty and image-level uncertainty. Experiments validate that MI-AOD sets a solid baseline for instance-level active learning. On commonly used object detection datasets, MI-AOD outperforms state-of-the-art methods with significant margins, particularly when the labeled sets are small. Code is available at https://github.com/yuantn/MI-AOD." "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,fu2021agreement,\cite{fu2021agreement},Agreement-Discrepancy-Selection: Active learning with progressive distribution alignment,,,True,False,"Fu, Mengying and Yuan, Tianning and Wan, Fang and Xu, Songcen and Ye, Qixiang",2021.0,,,,,Agreement-Discrepancy-Selection: Active learning with progressive distribution alignment,[PDF] Selection: Active Learning with Progressive Distribution Alignment,https://cdn.aaai.org/ojs/16915/16915-13-20409-1-2-20210518.pdf,"In this paper, we propose an agreement-discrepancy-selection (ADS) approach, and target at unifying distribution alignment with sample selection by." "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,konevcny2015federated,\cite{konevcny2015federated},Federated Optimization: Distributed Optimization Beyond the Datacenter,http://arxiv.org/abs/1511.03575v1,"We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are distributed (unevenly) over an extremely large number of \nodes, but the goal remains to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of utmost importance. A motivating example for federated optimization arises when we keep the training data locally on users' mobile devices rather than logging it to a data center for training. Instead, the mobile devices are used as nodes performing computation on their local data in order to update a global model. We suppose that we have an extremely large number of devices in our network, each of which has only a tiny fraction of data available totally; in particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, we assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results.
This work also sets a path for future research needed in the context of federated optimization.",True,True,"Kone{\v{c}}n{\`y}, Jakub and McMahan, Brendan and Ramage, Daniel",2015.0,,,,,Federated Optimization: Distributed Optimization Beyond the Datacenter,Federated Optimization: Distributed Optimization Beyond the Datacenter,http://arxiv.org/pdf/1511.03575v1,"We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are distributed (unevenly) over an extremely large number of \nodes, but the goal remains to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of utmost importance. A motivating example for federated optimization arises when we keep the training data locally on users' mobile devices rather than logging it to a data center for training. Instead, the mobile devices are used as nodes performing computation on their local data in order to update a global model. We suppose that we have an extremely large number of devices in our network, each of which has only a tiny fraction of data available totally; in particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, we assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results. This work also sets a path for future research needed in the context of federated optimization." "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,mcmahan2017communication,\cite{mcmahan2017communication},"Communication-Efficient Learning of Deep Networks from Decentralized Data",http://arxiv.org/abs/1602.05629v4,"Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks based on iterative model averaging, and conduct an extensive empirical evaluation, considering five different model architectures and four datasets. These experiments demonstrate the approach is robust to the unbalanced and non-IID data distributions that are a defining characteristic of this setting.
Communication costs are the principal constraint, and we show a reduction in required communication rounds by 10-100x as compared to synchronized stochastic gradient descent.",True,True,"McMahan, Brendan and Moore, Eider and Ramage, Daniel and Hampson, Seth and y Arcas, Blaise Aguera",2017.0,,,,,"Communication-Efficient Learning of Deep Networks from Decentralized Data",[1602.05629] Communication-Efficient Learning of Deep Networks ...,https://arxiv.org/abs/1602.05629,"Communication-Efficient Learning of Deep Networks from Decentralized Data. Authors:H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson," "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,ahmed2020active,\cite{ahmed2020active},Active learning based federated learning for waste and natural disaster image classification,,,True,False,"Ahmed, Lulwa and Ahmad, Kashif and Said, Naina and Qolomany, Basheer and Qadir, Junaid and Al-Fuqaha, Ala",2020.0,,,,IEEE Access,Active learning based federated learning for waste and natural disaster image classification,Active Learning Based Federated Learning for Waste and ...,https://ieeexplore.ieee.org/document/9261337/,by L Ahmed · 2020 · Cited by 96 — Active Learning (AL) provides an alternative solution allowing a Machine Learning (ML) model to automatically choose and label the data from "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,mohammad21flare,\cite{mohammad21flare},"{FLARE:} Federated active learning assisted by naming for responding to emergencies",,,True,False,"Mittal, Viyom and Jahanian, Mohammad and Ramakrishnan, K. K.",2021.0,,,,,"{FLARE:} Federated active learning assisted by naming for responding to emergencies",FLARE: Federated Active Learning Assisted by Naming for ...,https://ieeexplore.ieee.org/document/9651978,DEMO: FLARE: Federated Active Learning Assisted by Naming for Responding to Emergencies. Published in: 2021 IEEE 29th International Conference on Network Protocols (ICNP). Publisher: IEEE "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,jia2019active,\cite{jia2019active},Active Learning Solution on Distributed Edge Computing,http://arxiv.org/abs/1906.10718v1,"Industry 4.0 becomes possible through the convergence between Operational and Information Technologies. All the requirements to realize the convergence is integrated on the Fog Platform. Fog Platform is introduced between the cloud server and edge devices when the unprecedented generation of data causes the burden of the cloud server, leading the ineligible latency. In this new paradigm, we divide the computation tasks and push it down to edge devices. Furthermore, local computing (at edge side) may improve privacy and trust. To address these problems, we present a new method, in which we decompose the data aggregation and processing, by dividing them between edge devices and fog nodes intelligently.
We apply active learning on edge devices; and federated learning on the fog node which significantly reduces the data samples to train the model as well as the communication cost. To show the effectiveness of the proposed method, we implemented and evaluated its performance for an image classification task. In addition, we consider two settings: massively distributed and non-massively distributed and offer the corresponding solutions.",True,True,Jia Qian and Sayantan Sengupta and Lars Kai Hansen,2019.0,,,,,Active Learning Solution on Distributed Edge Computing,Active Learning Solution on Distributed Edge Computing,http://arxiv.org/pdf/1906.10718v1,"Industry 4.0 becomes possible through the convergence between Operational and Information Technologies. All the requirements to realize the convergence is integrated on the Fog Platform. Fog Platform is introduced between the cloud server and edge devices when the unprecedented generation of data causes the burden of the cloud server, leading the ineligible latency. In this new paradigm, we divide the computation tasks and push it down to edge devices. Furthermore, local computing (at edge side) may improve privacy and trust. To address these problems, we present a new method, in which we decompose the data aggregation and processing, by dividing them between edge devices and fog nodes intelligently. We apply active learning on edge devices; and federated learning on the fog node which significantly reduces the data samples to train the model as well as the communication cost. To show the effectiveness of the proposed method, we implemented and evaluated its performance for an image classification task. In addition, we consider two settings: massively distributed and non-massively distributed and offer the corresponding solutions." "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,nicolas2020combine,\cite{nicolas2020combine},"Combining Federated and Active Learning for Communication-efficient Distributed Failure Prediction in Aeronautics",http://arxiv.org/abs/2001.07504v1,"Machine Learning has proven useful in the recent years as a way to achieve failure prediction for industrial systems. However, the high computational resources necessary to run learning algorithms are an obstacle to its widespread application. The sub-field of Distributed Learning offers a solution to this problem by enabling the use of remote resources but at the expense of introducing communication costs in the application that are not always acceptable. In this paper, we propose a distributed learning approach able to optimize the use of computational and communication resources to achieve excellent learning model performances through a centralized architecture. To achieve this, we present a new centralized distributed learning algorithm that relies on the learning paradigms of Active Learning and Federated Learning to offer a communication-efficient method that offers guarantees of model precision on both the clients and the central server. 
We evaluate this method on a public benchmark and show that its performances in terms of precision are very close to state-of-the-art performance level of non-distributed learning despite additional constraints.",True,True,Nicolas Aussel and Sophie Chabridon and Yohan Petetin,2020.0,,,,,"Combining Federated and Active Learning for Communication-efficient Distributed Failure Prediction in Aeronautics",Combining Federated and Active Learning for Communication ...,https://www.researchgate.net/publication/338737955_Combining_Federated_and_Active_Learning_for_Communication-efficient_Distributed_Failure_Prediction_in_Aeronautics,"In this paper, we propose a distributed learning approach able to optimize the use of computational and communication resources to achieve" "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,jin2022federated,\cite{jin2022federated},"Federated Active Learning (F-AL): an Efficient Annotation Strategy for Federated Learning",http://arxiv.org/abs/2202.00195v2,"Federated learning (FL) has been intensively investigated in terms of communication efficiency, privacy, and fairness. However, efficient annotation, which is a pain point in real-world FL applications, is less studied. In this project, we propose to apply active learning (AL) and sampling strategy into the FL framework to reduce the annotation workload. We expect that the AL and FL can improve the performance of each other complementarily. In our proposed federated active learning (F-AL) method, the clients collaboratively implement the AL to obtain the instances which are considered as informative to FL in a distributed optimization manner. We compare the test accuracies of the global FL models using the conventional random sampling strategy, client-level separate AL (S-AL), and the proposed F-AL. We empirically demonstrate that the F-AL outperforms baseline methods in image classification tasks.",True,True,Jin{-}Hyun Ahn and Kyung Sang Kim and Jeongwan Koh and Quanzheng Li,2022.0,,,,,"Federated Active Learning (F-AL): an Efficient Annotation Strategy for Federated Learning",Federated Active Learning (F-AL): an Efficient Annotation Strategy ...,https://arxiv.org/abs/2202.00195,"In this project, we propose to apply active learning (AL) and sampling strategy into the FL framework to reduce the annotation workload." "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,10184650,\cite{10184650},Distribution-Regularized Federated Learning on Non-IID Data,,,True,False,"Wang, Yansheng and Tong, Yongxin and Zhou, Zimu and Zhang, Ruisheng and Pan, Sinno Jialin and Fan, Lixin and Yang, Qiang",2023.0,,,,,Distribution-Regularized Federated Learning on Non-IID Data,Distribution-Regularized Federated Learning on Non-IID Data,https://ieeexplore.ieee.org/document/10184650,We propose a distribution regularization for FL on non-IID data such that the discrepancy of data distributions between clients is reduced. "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,jia2020Robust,\cite{jia2020Robust},Robustness analytics to data heterogeneity in edge computing,http://arxiv.org/abs/2002.05038v2,"Federated Learning is a framework that jointly trains a model \textit{with} complete knowledge on a remotely placed centralized server, but \textit{without} the requirement of accessing the data stored in distributed machines. 
Some work assumes that the data generated from edge devices are identically and independently sampled from a common population distribution. However, such ideal sampling may not be realistic in many contexts. Also, models based on intrinsic agency, such as active sampling schemes, may lead to highly biased sampling. So an imminent question is how robust Federated Learning is to biased sampling? In this work\footnote{\url{https://github.com/jiaqian/robustness_of_FL}}, we experimentally investigate two such scenarios. First, we study a centralized classifier aggregated from a collection of local classifiers trained with data having categorical heterogeneity. Second, we study a classifier aggregated from a collection of local classifiers trained by data through active sampling at the edge. We present evidence in both scenarios that Federated Learning is robust to data heterogeneity when local training iterations and communication frequency are appropriately chosen.",True,True,Jia Qian and Lars Kai Hansen and Xenofon Fafoutis and Prayag Tiwari and Hari Mohan Pandey,2020.0,,,,Comput. Commun.,Robustness analytics to data heterogeneity in edge computing,Robustness analytics to data heterogeneity in edge computing,http://arxiv.org/pdf/2002.05038v2,"Federated Learning is a framework that jointly trains a model \textit{with} complete knowledge on a remotely placed centralized server, but \textit{without} the requirement of accessing the data stored in distributed machines. Some work assumes that the data generated from edge devices are identically and independently sampled from a common population distribution. However, such ideal sampling may not be realistic in many contexts. Also, models based on intrinsic agency, such as active sampling schemes, may lead to highly biased sampling. So an imminent question is how robust Federated Learning is to biased sampling? In this work\footnote{\url{https://github.com/jiaqian/robustness_of_FL}}, we experimentally investigate two such scenarios. First, we study a centralized classifier aggregated from a collection of local classifiers trained with data having categorical heterogeneity. Second, we study a classifier aggregated from a collection of local classifiers trained by data through active sampling at the edge. We present evidence in both scenarios that Federated Learning is robust to data heterogeneity when local training iterations and communication frequency are appropriately chosen." "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,cao2022knowledgeaware,\cite{cao2022knowledgeaware},Knowledge-Aware Federated Active Learning with Non-IID Data,http://arxiv.org/abs/2211.13579v3,"Federated learning enables multiple decentralized clients to learn collaboratively without sharing the local training data. However, the expensive annotation cost to acquire data labels on local clients remains an obstacle in utilizing local data. In this paper, we propose a federated active learning paradigm to efficiently learn a global model with limited annotation budget while protecting data privacy in a decentralized learning way. The main challenge faced by federated active learning is the mismatch between the active sampling goal of the global model on the server and that of the asynchronous local clients. This becomes even more significant when data is distributed non-IID across local clients. 
To address the aforementioned challenge, we propose Knowledge-Aware Federated Active Learning (KAFAL), which consists of Knowledge-Specialized Active Sampling (KSAS) and Knowledge-Compensatory Federated Update (KCFU). KSAS is a novel active sampling method tailored for the federated active learning problem. It deals with the mismatch challenge by sampling actively based on the discrepancies between local and global models. KSAS intensifies specialized knowledge in local clients, ensuring the sampled data to be informative for both the local clients and the global model. KCFU, in the meantime, deals with the client heterogeneity caused by limited data and non-IID data distributions. It compensates for each client's ability in weak classes by the assistance of the global model. Extensive experiments and analyses are conducted to show the superiority of KSAS over the state-of-the-art active learning methods and the efficiency of KCFU under the federated active learning framework.",True,True,Yu-Tong Cao and Jingya Wang and Ye Shi and Baosheng Yu and Dacheng Tao,2023.0,,,,,Knowledge-Aware Federated Active Learning with Non-IID Data,[PDF] Knowledge-Aware Federated Active Learning with Non-IID Data,https://openaccess.thecvf.com/content/ICCV2023/papers/Cao_Knowledge-Aware_Federated_Active_Learning_with_Non-IID_Data_ICCV_2023_paper.pdf,This paper devised a Knowledge-Aware Federated Active Learning (KAFAL) method for federated active learning with non-IID data. KAFAL computes the "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,kim2023rethinking,\cite{kim2023rethinking},Re-thinking Federated Active Learning based on Inter-class Diversity,http://arxiv.org/abs/2303.12317v1,"Although federated learning has made awe-inspiring advances, most studies have assumed that the client's data are fully labeled. However, in a real-world scenario, every client may have a significant amount of unlabeled instances. Among the various approaches to utilizing unlabeled data, a federated active learning framework has emerged as a promising solution. In the decentralized setting, there are two types of available query selector models, namely 'global' and 'local-only' models, but little literature discusses their performance dominance and its causes. In this work, we first demonstrate that the superiority of two selector models depends on the global and local inter-class diversity. Furthermore, we observe that the global and local-only models are the keys to resolving the imbalance of each side. Based on our findings, we propose LoGo, a FAL sampling strategy robust to varying local heterogeneity levels and global imbalance ratio, that integrates both models by two steps of active selection scheme. LoGo consistently outperforms six active learning strategies in the total number of 38 experimental settings.",True,True,SangMook Kim and Sangmin Bae and Hwanjun Song and Se-Young Yun,2023.0,,,,,Re-thinking Federated Active Learning based on Inter-class Diversity,[PDF] Re-Thinking Federated Active Learning Based on Inter-Class Diversity,https://openaccess.thecvf.com/content/CVPR2023/papers/Kim_Re-Thinking_Federated_Active_Learning_Based_on_Inter-Class_Diversity_CVPR_2023_paper.pdf,"Hence, in the FAL framework, the active selection algorithm has to ensure inter-class diversity from both local and global perspectives.
Second, there are two" "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,goetz2019active,\cite{goetz2019active},Active Federated Learning,http://arxiv.org/abs/1909.12641v1,"Federated Learning allows for population level models to be trained without centralizing client data by transmitting the global model to clients, calculating gradients locally, then averaging the gradients. Downloading models and uploading gradients uses the client's bandwidth, so minimizing these transmission costs is important. The data on each client is highly variable, so the benefit of training on different clients may differ dramatically. To exploit this we propose Active Federated Learning, where in each round clients are selected not uniformly at random, but with a probability conditioned on the current model and the data on the client to maximize efficiency. We propose a cheap, simple and intuitive sampling scheme which reduces the number of required training iterations by 20-70% while maintaining the same model accuracy, and which mimics well known resampling techniques under certain conditions.",True,True,"Goetz, Jack and Malik, Kshitiz and Bui, Duc and Moon, Seungwhan and Liu, Honglei and Kumar, Anuj",2019.0,,,,,Active Federated Learning,Active Federated Learning,http://arxiv.org/pdf/1909.12641v1,"Federated Learning allows for population level models to be trained without centralizing client data by transmitting the global model to clients, calculating gradients locally, then averaging the gradients. Downloading models and uploading gradients uses the client's bandwidth, so minimizing these transmission costs is important. The data on each client is highly variable, so the benefit of training on different clients may differ dramatically. To exploit this we propose Active Federated Learning, where in each round clients are selected not uniformly at random, but with a probability conditioned on the current model and the data on the client to maximize efficiency. We propose a cheap, simple and intuitive sampling scheme which reduces the number of required training iterations by 20-70% while maintaining the same model accuracy, and which mimics well known resampling techniques under certain conditions." "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,li2021sample,\cite{li2021sample},Sample-level data selection for federated learning,,,True,False,"Li, Anran and Zhang, Lan and Tan, Juntao and Qin, Yaxuan and Wang, Junhao and Li, Xiang-Yang",2021.0,,,,,Sample-level data selection for federated learning,Sample-level Data Selection for Federated Learning - IEEE Xplore,https://ieeexplore.ieee.org/iel7/9488422/9488423/09488723.pdf,"In FL systems, the selection of training samples has a significant impact on model performances, e.g., selecting participants whose datasets have erroneous" "CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning",2504.17448v1,shin2022sample,\cite{shin2022sample},Sample selection with deadline control for efficient federated learning on heterogeneous clients,,,True,False,"Shin, Jaemin and Li, Yuanchun and Liu, Yunxin and Lee, Sung-Ju",2022.0,,,,,Sample selection with deadline control for efficient federated learning on heterogeneous clients,Sample Selection with Deadline Control for Efficient Federated ... 
- dblp,https://dblp.org/rec/journals/corr/abs-2201-01601,Bibliographic details on Sample Selection with Deadline Control for Efficient Federated Learning on Heterogeneous Clients. "Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,wang2024must,\cite{wang2024must},"MUST: An Effective and Scalable Framework for Multimodal Search of Target Modality",http://arxiv.org/abs/2312.06397v1,"We investigate the problem of multimodal search of target modality, where the task involves enhancing a query in a specific target modality by integrating information from auxiliary modalities. The goal is to retrieve relevant objects whose contents in the target modality match the specified multimodal query. The paper first introduces two baseline approaches that integrate techniques from the Database, Information Retrieval, and Computer Vision communities. These baselines either merge the results of separate vector searches for each modality or perform a single-channel vector search by fusing all modalities. However, both baselines have limitations in terms of efficiency and accuracy as they fail to adequately consider the varying importance of fusing information across modalities. To overcome these limitations, the paper proposes a novel framework, called MUST. Our framework employs a hybrid fusion mechanism, combining different modalities at multiple stages. Notably, we leverage vector weight learning to determine the importance of each modality, thereby enhancing the accuracy of joint similarity measurement. Additionally, the proposed framework utilizes a fused proximity graph index, enabling efficient joint search for multimodal queries. MUST offers several other advantageous properties, including pluggable design to integrate any advanced embedding techniques, user flexibility to customize weight preferences, and modularized index construction. Extensive experiments on real-world datasets demonstrate the superiority of MUST over the baselines in terms of both search accuracy and efficiency. Our framework achieves over 10x faster search times while attaining an average of 93% higher accuracy. Furthermore, MUST exhibits scalability to datasets containing more than 10 million data elements.",True,True,"Wang, Mengzhao and Ke, Xiangyu and Xu, Xiaoliang and Chen, Lu and Gao, Yunjun and Huang, Pinpin and Zhu, Runkai",2024.0,,,,,"MUST: An Effective and Scalable Framework for Multimodal Search of Target Modality",An Effective and Scalable Framework for Multimodal ...,https://ieeexplore.ieee.org/iel8/10597630/10597390/10597872.pdf,"by M Wang · 2024 · Cited by 12 — MUST is a framework for multimodal search of target modality, enhancing a query by integrating information from auxiliary modalities. It uses a hybrid fusion" "Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,yu2014large,\cite{yu2014large},Large-Scale Multi-Label Learning with Incomplete Label Assignments,http://arxiv.org/abs/1407.1538v1,"Multi-label learning deals with the classification problems where each instance can be assigned with multiple labels simultaneously. Conventional multi-label learning approaches mainly focus on exploiting label correlations. It is usually assumed, explicitly or implicitly, that the label sets for training instances are fully labeled without any missing labels. However, in many real-world multi-label datasets, the label assignments for training instances can be incomplete. 
Some ground-truth labels can be missed by the labeler from the label set. This problem is especially typical when the number instances is very large, and the labeling cost is very high, which makes it almost impossible to get a fully labeled training set. In this paper, we study the problem of large-scale multi-label learning with incomplete label assignments. We propose an approach, called MPU, based upon positive and unlabeled stochastic gradient descent and stacked models. Unlike prior works, our method can effectively and efficiently consider missing labels and label correlations simultaneously, and is very scalable, that has linear time complexities over the size of the data. Extensive experiments on two real-world multi-label datasets show that our MPU model consistently outperform other commonly-used baselines.",True,True,"Yu, Hsiang-Fu and Jain, Prateek and Kar, Purushottam and Dhillon, Inderjit",2014.0,,,,,Large-Scale Multi-Label Learning with Incomplete Label Assignments,Large-Scale Multi-Label Learning with Incomplete Label Assignments,https://arxiv.org/abs/1407.1538,"In this paper, we study the problem of large-scale multi-label learning with incomplete label assignments. We propose an approach, called MPU," "Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,xu2020product,\cite{xu2020product},Product Knowledge Graph Embedding for E-commerce,http://arxiv.org/abs/1911.12481v1,"In this paper, we propose a new product knowledge graph (PKG) embedding approach for learning the intrinsic product relations as product knowledge for e-commerce. We define the key entities and summarize the pivotal product relations that are critical for general e-commerce applications including marketing, advertisement, search ranking and recommendation. We first provide a comprehensive comparison between PKG and ordinary knowledge graph (KG) and then illustrate why KG embedding methods are not suitable for PKG learning. We construct a self-attention-enhanced distributed representation learning model for learning PKG embeddings from raw customer activity data in an end-to-end fashion. We design an effective multi-task learning schema to fully leverage the multi-modal e-commerce data. The Poincare embedding is also employed to handle complex entity structures. We use a real-world dataset from grocery.walmart.com to evaluate the performances on knowledge completion, search ranking and recommendation. The proposed approach compares favourably to baselines in knowledge completion and downstream tasks.",True,True,"Xu, Da and Ruan, Chuanwei and Korpeoglu, Evren and Kumar, Sushant and Achan, Kannan",2020.0,,,,,Product Knowledge Graph Embedding for E-commerce,Product Knowledge Graph Embedding for E-commerce,http://arxiv.org/pdf/1911.12481v1,"In this paper, we propose a new product knowledge graph (PKG) embedding approach for learning the intrinsic product relations as product knowledge for e-commerce. We define the key entities and summarize the pivotal product relations that are critical for general e-commerce applications including marketing, advertisement, search ranking and recommendation. We first provide a comprehensive comparison between PKG and ordinary knowledge graph (KG) and then illustrate why KG embedding methods are not suitable for PKG learning. We construct a self-attention-enhanced distributed representation learning model for learning PKG embeddings from raw customer activity data in an end-to-end fashion. 
We design an effective multi-task learning schema to fully leverage the multi-modal e-commerce data. The Poincare embedding is also employed to handle complex entity structures. We use a real-world dataset from grocery.walmart.com to evaluate the performances on knowledge completion, search ranking and recommendation. The proposed approach compares favourably to baselines in knowledge completion and downstream tasks." "Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,asai2023retrieval,\cite{asai2023retrieval},Retrieval-based language models and applications,,,True,False,"Asai, Akari and Min, Sewon and Zhong, Zexuan and Chen, Danqi",2023.0,,,,,Retrieval-based language models and applications,ACL 2023 Tutorial: Retrieval-based Language Models ... - YouTube,https://www.youtube.com/watch?v=BsxxjMPu-YM,ACL 2023 Tutorial: Retrieval-based Language Models and Applications. "Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,huang2020embedding,\cite{huang2020embedding},Embedding-based Retrieval in Facebook Search,http://arxiv.org/abs/2006.11632v2,"Search in social networks such as Facebook poses different challenges than in classical web search: besides the query text, it is important to take into account the searcher's context to provide relevant results. Their social graph is an integral part of this context and is a unique aspect of Facebook search. While embedding-based retrieval (EBR) has been applied in web search engines for years, Facebook search was still mainly based on a Boolean matching model. In this paper, we discuss the techniques for applying EBR to a Facebook Search system. We introduce the unified embedding framework developed to model semantic embeddings for personalized search, and the system to serve embedding-based retrieval in a typical search system based on an inverted index. We discuss various tricks and experiences on end-to-end optimization of the whole system, including ANN parameter tuning and full-stack optimization. Finally, we present our progress on two selected advanced topics about modeling. We evaluated EBR on verticals for Facebook Search with significant metrics gains observed in online A/B experiments. We believe this paper will provide useful insights and experiences to help people on developing embedding-based retrieval systems in search engines.",True,True,"Huang, Jui-Ting and Sharma, Ashish and Sun, Shuying and Xia, Li and Zhang, David and Pronin, Philip and Padmanabhan, Janani and Ottaviano, Giuseppe and Yang, Linjun",2020.0,,,,,Embedding-based Retrieval in Facebook Search,Embedding-based Retrieval in Facebook Search,https://dl.acm.org/doi/10.1145/3394486.3403305,"In this paper, we discuss the techniques for applying EBR to a Facebook Search system. We introduce the unified embedding framework developed to model semantic" "Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,radford2021learning,\cite{radford2021learning},Learning Transferable Visual Models From Natural Language Supervision,http://arxiv.org/abs/2103.00020v1,"State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept.
Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.",True,True,"Radford, Alec and Kim, Jong Wook and Hallacy, Chris and Ramesh, Aditya and Goh, Gabriel and Agarwal, Sandhini and Sastry, Girish and Askell, Amanda and Mishkin, Pamela and Clark, Jack and others",2021.0,,,,,Learning Transferable Visual Models From Natural Language Supervision,Learning Transferable Visual Models From Natural Language Supervision,http://arxiv.org/pdf/2103.00020v1,"State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP." "Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,wang2017survey,\cite{wang2017survey},A Survey on Learning to Hash,http://arxiv.org/abs/1606.00185v2,"Nearest neighbor search is a problem of finding the data points from the database such that the distances from them to the query point are the smallest. Learning to hash is one of the major solutions to this problem and has been widely studied recently. 
In this paper, we present a comprehensive survey of the learning to hash algorithms, categorize them according to the manners of preserving the similarities into: pairwise similarity preserving, multiwise similarity preserving, implicit similarity preserving, as well as quantization, and discuss their relations. We separate quantization from pairwise similarity preserving as the objective function is very different though quantization, as we show, can be derived from preserving the pairwise similarities. In addition, we present the evaluation protocols, and the general performance analysis, and point out that the quantization algorithms perform superiorly in terms of search accuracy, search time cost, and space cost. Finally, we introduce a few emerging topics.",True,True,"Wang, Jingdong and Zhang, Ting and Sebe, Nicu and Shen, Heng Tao and others",2017.0,,,,TPAMI,A Survey on Learning to Hash,A Survey on Learning to Hash,http://arxiv.org/pdf/1606.00185v2,"Nearest neighbor search is a problem of finding the data points from the database such that the distances from them to the query point are the smallest. Learning to hash is one of the major solutions to this problem and has been widely studied recently. In this paper, we present a comprehensive survey of the learning to hash algorithms, categorize them according to the manners of preserving the similarities into: pairwise similarity preserving, multiwise similarity preserving, implicit similarity preserving, as well as quantization, and discuss their relations. We separate quantization from pairwise similarity preserving as the objective function is very different though quantization, as we show, can be derived from preserving the pairwise similarities. In addition, we present the evaluation protocols, and the general performance analysis, and point out that the quantization algorithms perform superiorly in terms of search accuracy, search time cost, and space cost. Finally, we introduce a few emerging topics." "Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,wei2024det,\cite{wei2024det},Det-lsh: a locality-sensitive hashing scheme with dynamic encoding tree for approximate nearest neighbor search,,,True,False,"Wei, Jiuqi and Peng, Botao and Lee, Xiaodong and Palpanas, Themis",2024.0,,,,arXiv preprint arXiv:2406.10938,Det-lsh: a locality-sensitive hashing scheme with dynamic encoding tree for approximate nearest neighbor search,DET-LSH: A Locality-Sensitive Hashing Scheme with Dynamic ...,https://www.researchgate.net/publication/382927854_DET-LSH_A_Locality-Sensitive_Hashing_Scheme_with_Dynamic_Encoding_Tree_for_Approximate_Nearest_Neighbor_Search,"Based on DE-Tree, we propose a novel LSH scheme called DET-LSH. DET-LSH adopts a novel query strategy, which performs range queries in multiple independent" "Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,shrivastava2014asymmetric,\cite{shrivastava2014asymmetric},"Asymmetric LSH (ALSH) for Sublinear Time Maximum Inner Product Search (MIPS)",http://arxiv.org/abs/1405.5869v1,"We present the first provably sublinear time algorithm for approximate \emph{Maximum Inner Product Search} (MIPS). Our proposal is also the first hashing algorithm for searching with (un-normalized) inner product as the underlying similarity measure. Finding hashing schemes for MIPS was considered hard. 
We formally show that the existing Locality Sensitive Hashing (LSH) framework is insufficient for solving MIPS, and then we extend the existing LSH framework to allow asymmetric hashing schemes. Our proposal is based on an interesting mathematical phenomenon in which inner products, after independent asymmetric transformations, can be converted into the problem of approximate near neighbor search. This key observation makes efficient sublinear hashing scheme for MIPS possible. In the extended asymmetric LSH (ALSH) framework, we provide an explicit construction of provably fast hashing scheme for MIPS. The proposed construction and the extended LSH framework could be of independent theoretical interest. Our proposed algorithm is simple and easy to implement. We evaluate the method, for retrieving inner products, in the collaborative filtering task of item recommendations on Netflix and Movielens datasets.",True,True,"Shrivastava, Anshumali and Li, Ping",2014.0,,,,,"Asymmetric LSH (ALSH) for Sublinear Time Maximum Inner Product Search (MIPS)",[1405.5869] Asymmetric LSH (ALSH) for Sublinear Time ...,https://arxiv.org/abs/1405.5869,by A Shrivastava · 2014 · Cited by 612 — Abstract:We present the first provably sublinear time algorithm for approximate \emph{Maximum Inner Product Search} (MIPS). "Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,shrivastava2015improved,\cite{shrivastava2015improved},"Improved Asymmetric Locality Sensitive Hashing (ALSH) for Maximum Inner Product Search (MIPS)",http://arxiv.org/abs/1410.5410v2,"Recently it was shown that the problem of Maximum Inner Product Search (MIPS) is efficient and it admits provably sub-linear hashing algorithms. Asymmetric transformations before hashing were the key in solving MIPS which was otherwise hard. In the prior work, the authors use asymmetric transformations which convert the problem of approximate MIPS into the problem of approximate near neighbor search which can be efficiently solved using hashing. In this work, we provide a different transformation which converts the problem of approximate MIPS into the problem of approximate cosine similarity search which can be efficiently solved using signed random projections. Theoretical analysis show that the new scheme is significantly better than the original scheme for MIPS. Experimental evaluations strongly support the theoretical findings.",True,True,"Shrivastava, Anshumali and Li, Ping",2015.0,,,,,"Improved Asymmetric Locality Sensitive Hashing (ALSH) for Maximum Inner Product Search (MIPS)",[1410.5410] Improved Asymmetric Locality Sensitive Hashing (ALSH ...,https://arxiv.org/abs/1410.5410,Recently it was shown that the problem of Maximum Inner Product Search (MIPS) is efficient and it admits provably sub-linear hashing algorithms. 
"Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,bachrach2014speeding,\cite{bachrach2014speeding},Speeding up the xbox recommender system using a euclidean transformation for inner-product spaces,,,True,False,"Bachrach, Yoram and Finkelstein, Yehuda and Gilad-Bachrach, Ran and Katzir, Liran and Koenigstein, Noam and Nice, Nir and Paquet, Ulrich",2014.0,,,,,Speeding up the xbox recommender system using a euclidean transformation for inner-product spaces,[PDF] Speeding Up the Xbox Recommender System Using a Euclidean ...,https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/XboxInnerProduct.pdf,"The paper speeds up Xbox recommendations by transforming the inner product problem to a Euclidean space, using a PCA-Tree data structure and neighborhood" "Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,yan2018norm,\cite{yan2018norm},Norm-Ranging LSH for Maximum Inner Product Search,http://arxiv.org/abs/1809.08782v2,"Neyshabur and Srebro proposed Simple-LSH, which is the state-of-the-art hashing method for maximum inner product search (MIPS) with performance guarantee. We found that the performance of Simple-LSH, in both theory and practice, suffers from long tails in the 2-norm distribution of real datasets. We propose Norm-ranging LSH, which addresses the excessive normalization problem caused by long tails in Simple-LSH by partitioning a dataset into multiple sub-datasets and building a hash index for each sub-dataset independently. We prove that Norm-ranging LSH has lower query time complexity than Simple-LSH. We also show that the idea of partitioning the dataset can improve other hashing based methods for MIPS. To support efficient query processing on the hash indexes of the sub-datasets, a novel similarity metric is formulated. Experiments show that Norm-ranging LSH achieves an order of magnitude speedup over Simple-LSH for the same recall, thus significantly benefiting applications that involve MIPS.",True,True,"Yan, Xiao and Li, Jinfeng and Dai, Xinyan and Chen, Hongzhi and Cheng, James",2018.0,,,,,Norm-Ranging LSH for Maximum Inner Product Search,Norm-Ranging LSH for Maximum Inner Product Search,https://arxiv.org/abs/1809.08782,"by X Yan · 2018 · Cited by 70 — We propose Norm-ranging LSH, which addresses the excessive normalization problem caused by long tails in Simple-LSH by partitioning a dataset" "Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,neyshabur2015symmetric,\cite{neyshabur2015symmetric},On Symmetric and Asymmetric LSHs for Inner Product Search,http://arxiv.org/abs/1410.5518v3,"We consider the problem of designing locality sensitive hashes (LSH) for inner product similarity, and of the power of asymmetric hashes in this context. Shrivastava and Li argue that there is no symmetric LSH for the problem and propose an asymmetric LSH based on different mappings for query and database points. However, we show there does exist a simple symmetric LSH that enjoys stronger guarantees and better empirical performance than the asymmetric LSH they suggest. 
We also show a variant of the settings where asymmetry is in-fact needed, but there a different asymmetric LSH is required.",True,True,"Neyshabur, Behnam and Srebro, Nathan",2015.0,,,,,On Symmetric and Asymmetric LSHs for Inner Product Search,On Symmetric and Asymmetric LSHs for Inner Product Search,http://arxiv.org/pdf/1410.5518v3,"We consider the problem of designing locality sensitive hashes (LSH) for inner product similarity, and of the power of asymmetric hashes in this context. Shrivastava and Li argue that there is no symmetric LSH for the problem and propose an asymmetric LSH based on different mappings for query and database points. However, we show there does exist a simple symmetric LSH that enjoys stronger guarantees and better empirical performance than the asymmetric LSH they suggest. We also show a variant of the settings where asymmetry is in-fact needed, but there a different asymmetric LSH is required." "Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,zhao2023fargo,\cite{zhao2023fargo},FARGO: Fast maximum inner product search via global multi-probing,,,True,False,"Zhao, Xi and Zheng, Bolong and Yi, Xiaomeng and Luan, Xiaofan and Xie, Charles and Zhou, Xiaofang and Jensen, Christian S",2023.0,,,,PVLDB,FARGO: Fast maximum inner product search via global multi-probing,[PDF] FARGO: Fast Maximum Inner Product Search via Global Multi-Probing,https://www.vldb.org/pvldb/vol16/p1100-zheng.pdf,"FARGO is a fast search framework for MIPS using global multi-probing (GMP) to examine high-quality candidates, unlike Multi-Probe." "Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,song2021promips,\cite{song2021promips},ProMIPS: Efficient high-dimensional C-approximate maximum inner product search with a lightweight index,,,True,False,"Song, Yang and Gu, Yu and Zhang, Rui and Yu, Ge",2021.0,,,,,ProMIPS: Efficient high-dimensional C-approximate maximum inner product search with a lightweight index,ProMIPS: Efficient High-Dimensional c-Approximate Maximum Inner ...,https://arxiv.org/abs/2104.04406,"In this paper, we relax the guarantee of accuracy for efficiency and propose an efficient method for c-Approximate Maximum Inner Product (c-AMIP) search with a" "Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,ma2024reconsidering,\cite{ma2024reconsidering},Reconsidering Tree based Methods for k-Maximum Inner-Product Search: The LRUS-CoverTree,,,True,False,"Ma, Hengzhao and Li, Jianzhong and Zhang, Yong",2024.0,,,,,Reconsidering Tree based Methods for k-Maximum Inner-Product Search: The LRUS-CoverTree,Reconsidering Tree based Methods for k-Maximum Inner- ...,https://ieeexplore.ieee.org/document/10598031/,by H Ma · 2024 · Cited by 4 — The new k- Maximum Inner-Product Search algorithm based on LRUS-CoverTree outperforms the state-of-the-art locality sensitive hashing based methods. "Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,dai2020norm,\cite{dai2020norm},"Norm-Explicit Quantization: Improving Vector Quantization for Maximum Inner Product Search",http://arxiv.org/abs/1911.04654v2,"Vector quantization (VQ) techniques are widely used in similarity search for data compression, fast metric computation and etc. Originally designed for Euclidean distance, existing VQ techniques (e.g., PQ, AQ) explicitly or implicitly minimize the quantization error. 
In this paper, we present a new angle to analyze the quantization error, which decomposes the quantization error into norm error and direction error. We show that quantization errors in norm have much higher influence on inner products than quantization errors in direction, and small quantization error does not necessarily lead to good performance in maximum inner product search (MIPS). Based on this observation, we propose norm-explicit quantization (NEQ) --- a general paradigm that improves existing VQ techniques for MIPS. NEQ quantizes the norms of items in a dataset explicitly to reduce errors in norm, which is crucial for MIPS. For the direction vectors, NEQ can simply reuse an existing VQ technique to quantize them without modification. We conducted extensive experiments on a variety of datasets and parameter configurations. The experimental results show that NEQ improves the performance of various VQ techniques for MIPS, including PQ, OPQ, RQ and AQ.",True,True,"Dai, Xinyan and Yan, Xiao and Ng, Kelvin KW and Liu, Jiu and Cheng, James",2020.0,,,,,"Norm-Explicit Quantization: Improving Vector Quantization for Maximum Inner Product Search",Improving Vector Quantization for Maximum Inner Product Search,https://arxiv.org/abs/1911.04654,We propose norm-explicit quantization (NEQ) --- a general paradigm that improves existing VQ techniques for MIPS. "Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,guo2020accelerating,\cite{guo2020accelerating},Accelerating Large-Scale Inference with Anisotropic Vector Quantization,http://arxiv.org/abs/1908.10396v5,"Quantization based techniques are the current state-of-the-art for scaling maximum inner product search to massive databases. Traditional approaches to quantization aim to minimize the reconstruction error of the database points. Based on the observation that for a given query, the database points that have the largest inner products are more relevant, we develop a family of anisotropic quantization loss functions. Under natural statistical assumptions, we show that quantization with these loss functions leads to a new variant of vector quantization that more greatly penalizes the parallel component of a datapoint's residual relative to its orthogonal component. 
The proposed approach achieves state-of-the-art results on the public benchmarks available at \url{ann-benchmarks.com}.",True,True,"Guo, Ruiqi and Sun, Philip and Lindgren, Erik and Geng, Quan and Simcha, David and Chern, Felix and Kumar, Sanjiv",2020.0,,,,,Accelerating Large-Scale Inference with Anisotropic Vector Quantization,Accelerating Large-Scale Inference with Anisotropic Vector ...,https://arxiv.org/abs/1908.10396,"Authors: Ruiqi Guo, Philip Sun, Erik Lindgren, Quan Geng, David Simcha, Felix Chern, Sanjiv Kumar. Subjects: Machine Learning (cs.LG); Machine Learning (stat.ML). Cite as: arXiv:1908.10396 [cs.LG] (or arXiv:1908.10396v5 [cs.LG] for this version). https://doi.org/10.48550/arXiv.1908.10396" "Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,sun2024soar,\cite{sun2024soar},SOAR: Improved Indexing for Approximate Nearest Neighbor Search,http://arxiv.org/abs/2404.00774v1,"This paper introduces SOAR: Spilling with Orthogonality-Amplified Residuals, a novel data indexing technique for approximate nearest neighbor (ANN) search. SOAR extends upon previous approaches to ANN search, such as spill trees, that utilize multiple redundant representations while partitioning the data to reduce the probability of missing a nearest neighbor during search. Rather than training and computing these redundant representations independently, however, SOAR uses an orthogonality-amplified residual loss, which optimizes each representation to compensate for cases where other representations perform poorly. This drastically improves the overall index quality, resulting in state-of-the-art ANN benchmark performance while maintaining fast indexing times and low memory consumption."
"Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,morozov2018non,\cite{morozov2018non},Non-metric similarity graphs for maximum inner product search,,,True,False,"Morozov, Stanislav and Babenko, Artem",2018.0,,,,,Non-metric similarity graphs for maximum inner product search,Reviews: Non-metric Similarity Graphs for Maximum Inner ...,https://proceedings.neurips.cc/paper/2018/file/229754d7799160502a143a72f6789927-Reviews.html,This paper addresses the Maximum Inner Product Search (MIPS) problem by using the popular Approximate Nearest Neighbor Search (ANN Search) technique: Navigable "Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,liu2020understanding,\cite{liu2020understanding},"Understanding and Improving Proximity Graph based Maximum Inner Product Search",http://arxiv.org/abs/1909.13459v2,"The inner-product navigable small world graph (ip-NSW) represents the state-of-the-art method for approximate maximum inner product search (MIPS) and it can achieve an order of magnitude speedup over the fastest baseline. However, to date it is still unclear where its exceptional performance comes from. In this paper, we show that there is a strong norm bias in the MIPS problem, which means that the large norm items are very likely to become the result of MIPS. Then we explain the good performance of ip-NSW as matching the norm bias of the MIPS problem - large norm items have big in-degrees in the ip-NSW proximity graph and a walk on the graph spends the majority of computation on these items, thus effectively avoids unnecessary computation on small norm items. Furthermore, we propose the ip-NSW+ algorithm, which improves ip-NSW by introducing an additional angular proximity graph. Search is first conducted on the angular graph to find the angular neighbors of a query and then the MIPS neighbors of these angular neighbors are used to initialize the candidate pool for search on the inner-product proximity graph. 
Experiment results show that ip-NSW+ consistently and significantly outperforms ip-NSW and provides more robust performance under different data distributions.",True,True,"Liu, Jie and Yan, Xiao and Dai, Xinyan and Li, Zhirong and Cheng, James and Yang, Ming-Chang",2020.0,,,,,"Understanding and Improving Proximity Graph based Maximum Inner Product Search",Understanding and Improving Proximity Graph based ...,https://arxiv.org/abs/1909.13459,by J Liu · 2019 · Cited by 39 — The inner-product navigable small world graph (ip-NSW) represents the state-of-the-art method for approximate maximum inner product search (MIPS) "Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,zhou2019mobius,\cite{zhou2019mobius},"M{\""o}bius transformation for fast inner product search on graph",,,True,False,"Zhou, Zhixin and Tan, Shulong and Xu, Zhaozhuo and Li, Ping",2019.0,,,,,"M{\""o}bius transformation for fast inner product search on graph",Möbius transformation for fast inner product search on graph,https://dl.acm.org/doi/10.5555/3454287.3455025,by Z Zhou · 2019 · Cited by 68 — Our proposed method is based on the property that Möbius transformation introduces an isomorphism between a subgraph of ℓ2-Delaunay graph and Delaunay graph for "Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search",2504.14861v1,tan2021norm,\cite{tan2021norm},Norm adjusted proximity graph for fast inner product retrieval,,,True,False,"Tan, Shulong and Xu, Zhaozhuo and Zhao, Weijie and Fei, Hongliang and Zhou, Zhixin and Li, Ping",2021.0,,,,,Norm adjusted proximity graph for fast inner product retrieval,Norm Adjusted Proximity Graph for Fast Inner Product Retrieval,https://oa.mg/work/10.1145/3447548.3467412,“Norm Adjusted Proximity Graph for Fast Inner Product Retrieval” is a paper by Shulong Tan Zhaozhuo Xu Weijie Zhao Hongliang Fei Zhixin Zhou Ping Li published AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Terry1994a,\cite{Terry1994a},Session Guarantees for Weakly Consistent Replicated Data,,,True,False,"Terry, D.B. and Demers, A.J. and Petersen, K. and Spreitzer, M.J. and Theimer, M.M. and Welch, B.B.",1994.0,,https://ieeexplore.ieee.org/document/331722,10.1109/PDIS.1994.331722,,Session Guarantees for Weakly Consistent Replicated Data,Session Guarantees for Weakly Consistent Replicated Data,https://www.cs.cornell.edu/courses/cs734/2000FA/cached%20papers/SessionGuaranteesPDIS_1.html,"Four per-session guarantees are proposed to aid users and applications of weakly consistent replicated data: Read Your Writes, Monotonic Reads, Writes Follow" AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Berenson1995,\cite{Berenson1995},A Critique of ANSI SQL Isolation Levels,http://arxiv.org/abs/cs/0701157v1,"ANSI SQL-92 defines Isolation Levels in terms of phenomena: Dirty Reads, Non-Repeatable Reads, and Phantoms. This paper shows that these phenomena and the ANSI SQL definitions fail to characterize several popular isolation levels, including the standard locking implementations of the levels. Investigating the ambiguities of the phenomena leads to clearer definitions; in addition new phenomena that better characterize isolation types are introduced. 
An important multiversion isolation type, Snapshot Isolation, is defined.",True,True,"Berenson, Hal and Bernstein, Phil and Gray, Jim and Melton, Jim and O'Neil, Elizabeth and O'Neil, Patrick",1995.0,,,10.1145/568271.223785,SIGMOD Rec.,A Critique of ANSI SQL Isolation Levels,A Critique of ANSI SQL Isolation Levels,http://arxiv.org/pdf/cs/0701157v1,"ANSI SQL-92 defines Isolation Levels in terms of phenomena: Dirty Reads, Non-Repeatable Reads, and Phantoms. This paper shows that these phenomena and the ANSI SQL definitions fail to characterize several popular isolation levels, including the standard locking implementations of the levels. Investigating the ambiguities of the phenomena leads to clearer definitions; in addition new phenomena that better characterize isolation types are introduced. An important multiversion isolation type, Snapshot Isolation, is defined." AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Adya2000,\cite{Adya2000},Generalized Isolation Level Definitions,,,True,False,"Adya, A. and Liskov, B. and O'Neil, P.",2000.0,,,10.1109/ICDE.2000.839388,,Generalized Isolation Level Definitions,Reviews for Paper 8-Generalized Isolation Level Definitions,https://web.eecs.umich.edu/~mozafari/fall2018/eecs584/reviews/summaries/summary8.html,"The author proposes the generalized isolation level definitions which are precise and implementation-independent (locking, optimism, serialization). It adopts" AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Crooks2017,\cite{Crooks2017},Seeing Is {{Believing}}: {{A Client-Centric Specification}} of {{Database Isolation}},,,True,False,"Crooks, Natacha and Pu, Youer and Alvisi, Lorenzo and Clement, Allen",2017.0,,,10.1145/3087801.3087802,,Seeing Is {{Believing}}: {{A Client-Centric Specification}} of {{Database Isolation}},[PDF] A Client-Centric Specification of Database Isolation,https://www.cs.cornell.edu/lorenzo/papers/Crooks17Seeing.pdf,Seeing is Believing: A Client-Centric Specification of Database. Isolation. Natacha Crooks. The University of Texas at Austin and Cornell University. Youer Pu. AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Burckhardt2014,\cite{Burckhardt2014},"Replicated data types: specification, verification, optimality",,,True,False,"Burckhardt, Sebastian and Gotsman, Alexey and Yang, Hongseok and Zawirski, Marek",2014.0,,https://doi.org/10.1145/2535838.2535848,10.1145/2535838.2535848,,"Replicated data types: specification, verification, optimality","Replicated data types: specification, verification, optimality",https://dl.acm.org/doi/10.1145/2578855.2535848,We propose a framework for specifying replicated data types using relations over events and verifying their implementations using replication-aware simulations. AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Cerone2015,\cite{Cerone2015},A {{Framework}} for {{Transactional Consistency Models}} with {{Atomic Visibility}},,,True,False,"Cerone, Andrea and Bernardi, Giovanni and Gotsman, Alexey",2015.0,,,10.4230/LIPIcs.CONCUR.2015.58,,A {{Framework}} for {{Transactional Consistency Models}} with {{Atomic Visibility}},A Framework for Transactional Consistency Models with ...,https://drops.dagstuhl.de/storage/00lipics/lipics-vol042-concur2015/LIPIcs.CONCUR.2015.58/LIPIcs.CONCUR.2015.58.pdf,by A Cerone · 2015 · Cited by 134 — A Framework for Transactional Consistency Models with Atomic Visibility. Our work systematises the knowledge about consistency models of replicated databases. 
AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Biswas2019,\cite{Biswas2019},On the Complexity of Checking Transactional Consistency,http://arxiv.org/abs/1908.04509v1,"Transactions simplify concurrent programming by enabling computations on shared data that are isolated from other concurrent computations and are resilient to failures. Modern databases provide different consistency models for transactions corresponding to different tradeoffs between consistency and availability. In this work, we investigate the problem of checking whether a given execution of a transactional database adheres to some consistency model. We show that consistency models like read committed, read atomic, and causal consistency are polynomial time checkable while prefix consistency and snapshot isolation are NP-complete in general. These results complement a previous NP-completeness result concerning serializability. Moreover, in the context of NP-complete consistency models, we devise algorithms that are polynomial time assuming that certain parameters in the input executions, e.g., the number of sessions, are fixed. We evaluate the scalability of these algorithms in the context of several production databases.",True,True,"Biswas, Ranadeep and Enea, Constantin",2019.0,,,10.1145/3360591,Proceedings of the ACM on Programming Languages,On the Complexity of Checking Transactional Consistency,On the Complexity of Checking Transactional Consistency,http://arxiv.org/pdf/1908.04509v1,"Transactions simplify concurrent programming by enabling computations on shared data that are isolated from other concurrent computations and are resilient to failures. Modern databases provide different consistency models for transactions corresponding to different tradeoffs between consistency and availability. In this work, we investigate the problem of checking whether a given execution of a transactional database adheres to some consistency model. We show that consistency models like read committed, read atomic, and causal consistency are polynomial time checkable while prefix consistency and snapshot isolation are NP-complete in general. These results complement a previous NP-completeness result concerning serializability. Moreover, in the context of NP-complete consistency models, we devise algorithms that are polynomial time assuming that certain parameters in the input executions, e.g., the number of sessions, are fixed. We evaluate the scalability of these algorithms in the context of several production databases." AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Liu2024a,\cite{Liu2024a},"Plume: Efficient and Complete Black-Box Checking of Weak Isolation Levels",,,True,False,"Si Liu and Long Gu and Hengfeng Wei and David A. Basin",2024.0,,https://doi.org/10.1145/3689742,10.1145/3689742,Proc. {ACM} Program. Lang.,"Plume: Efficient and Complete Black-Box Checking of Weak Isolation Levels",Efficient and Complete Black-box Checking of Weak Isolation ...,https://2024.splashcon.org/details/splash-2024-oopsla/85/Plume-Efficient-and-Complete-Black-box-Checking-of-Weak-Isolation-Levels,"In this paper we present Plume, the first efficient, complete, black-box checker for weak isolation levels. 
Plume builds on modular, fine-grained, transactional" AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Tan2020,\cite{Tan2020},Cobra: Making Transactional Key-Value Stores Verifiably Serializable,,,True,False,"Cheng Tan and Changgeng Zhao and Shuai Mu and Michael Walfish",2020.0,,https://www.usenix.org/conference/osdi20/presentation/tan,,,Cobra: Making Transactional Key-Value Stores Verifiably Serializable,Making transactional key-value stores verifiably serializable,https://dl.acm.org/doi/abs/10.5555/3488766.3488770,"by C Tan · 2020 · Cited by 61 — COBRA tames that problem by starting with a suitable SMT solver. COBRA then introduces several new techniques, including a new encoding of the" AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Geng2024,\cite{Geng2024},"IsoPredict: Dynamic Predictive Analysis for Detecting Unserializable Behaviors in Weakly Isolated Data Store Applications",http://arxiv.org/abs/2404.04621v1,"This paper presents the first dynamic predictive analysis for data store applications under weak isolation levels, called Isopredict. Given an observed serializable execution of a data store application, Isopredict generates and solves SMT constraints to find an unserializable execution that is a feasible execution of the application. Isopredict introduces novel techniques that handle divergent application behavior; solve mutually recursive sets of constraints; and balance coverage, precision, and performance. An evaluation on four transactional data store benchmarks shows that Isopredict often predicts unserializable behaviors, 99% of which are feasible.",True,True,"Geng, Chujun and Blanas, Spyros and Bond, Michael D. and Wang, Yang",2024.0,,,10.1145/3656391,Reproduction Package for 'IsoPredict: Dynamic Predictive Analysis for Detecting Unserializable Behaviors in Weakly Isolated Data Store Applications',"IsoPredict: Dynamic Predictive Analysis for Detecting Unserializable Behaviors in Weakly Isolated Data Store Applications",Chujun Geng - - researchr.org,https://conf.researchr.org/profile/conf/chujungeng,Author of IsoPredict: Dynamic Predictive Analysis for Detecting Unserializable Behaviors in Weakly Isolated Data Store Applications within the PLDI Research AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Zhang2023a,\cite{Zhang2023a},Viper: {{A Fast Snapshot Isolation Checker}},,,True,False,"Zhang, Jian and Ji, Ye and Mu, Shuai and Tan, Cheng",2023.0,,,10.1145/3552326.3567492,,Viper: {{A Fast Snapshot Isolation Checker}},Viper: A Fast Snapshot Isolation Checker - ACM Digital Library,https://dl.acm.org/doi/10.1145/3552326.3567492,"We present viper, an SI checker that is sound, complete, and fast. Viper checks black-box databases and hence is transparent to both users and databases." AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Huang2023b,\cite{Huang2023b},Efficient Black-box Checking of Snapshot Isolation in Databases,http://arxiv.org/abs/2301.07313v2,"Snapshot isolation (SI) is a prevalent weak isolation level that avoids the performance penalty imposed by serializability and simultaneously prevents various undesired data anomalies. Nevertheless, SI anomalies have recently been found in production cloud databases that claim to provide the SI guarantee. Given the complex and often unavailable internals of such databases, a black-box SI checker is highly desirable. In this paper we present PolySI, a novel black-box checker that efficiently checks SI and provides understandable counterexamples upon detecting violations. 
PolySI builds on a novel characterization of SI using generalized polygraphs (GPs), for which we establish its soundness and completeness. PolySI employs an SMT solver and also accelerates SMT solving by utilizing the compact constraint encoding of GPs and domain-specific optimizations for pruning constraints. As demonstrated by our extensive assessment, PolySI successfully reproduces all of 2477 known SI anomalies, detects novel SI violations in three production cloud databases, identifies their causes, outperforms the state-of-the-art black-box checkers under a wide range of workloads, and can scale up to large-sized workloads.",True,True,"Huang, Kaile and Liu, Si and Chen, Zhenge and Wei, Hengfeng and Basin, David and Li, Haixiang and Pan, Anqun",2023.0,,,10.14778/3583140.3583145,Proc. VLDB Endow.,Efficient Black-box Checking of Snapshot Isolation in Databases,Efficient Black-Box Checking of Snapshot Isolation in Databases,https://dl.acm.org/doi/abs/10.14778/3583140.3583145,"by K Huang · 2023 · Cited by 19 — In this paper we present PolySI, a black-box checker that efficiently checks SI and provides understandable counterexamples upon detecting violations. PolySI" AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Papadimitriou1979a,\cite{Papadimitriou1979a},The Serializability of Concurrent Database Updates,,,True,False,"Papadimitriou, Christos H.",1979.0,,https://dl.acm.org/doi/10.1145/322154.322158,10.1145/322154.322158,Journal of the ACM,The Serializability of Concurrent Database Updates,MIT/LCS/TR-210 - Serializability of - CSAIL Publications,https://publications.csail.mit.edu/lcs/pubs/pdf/MIT-LCS-TR-210.pdf,The Serializability of Concurrent Database Updates* by. Christos H. Papadimitriou. Massachusetts Institute of Technology. Abstract. A sequence of interleaved AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Furbach2015,\cite{Furbach2015},Memory-Model-Aware Testing: A Unified Complexity Analysis,,,True,False,"Furbach, Florian and Meyer, Roland and Schneider, Klaus and Senftleben, Maximilian",2015.0,,https://doi.org/10.1145/2753761,10.1145/2753761,ACM Trans. Embed. Comput. Syst.,Memory-Model-Aware Testing: A Unified Complexity Analysis,Memory-Model-Aware Testing: A Unified Complexity Analysis,https://dl.acm.org/doi/10.1145/2753761,"We determine the complexity of the testing problem for most of the known memory models. Moreover, we study the impact on the complexity of parameters, such as" AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Gibbons1997,\cite{Gibbons1997},Testing {{Shared Memories}},,,True,False,"Gibbons, Phillip B. and Korach, Ephraim",1997.0,,http://epubs.siam.org/doi/10.1137/S0097539794279614,10.1137/S0097539794279614,SIAM Journal on Computing,Testing {{Shared Memories}},Testing Shared Memories | SIAM Journal on Computing,https://epubs.siam.org/doi/10.1137/S0097539794279614,"A series of results are presented for testing an execution of a shared memory under various scenarios, comparing sequential consistency with linearizability," AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Gibbons1994,\cite{Gibbons1994},On testing cache-coherent shared memories,,,True,False,"Gibbons, Phillip B and Korach, Ephraim",1994.0,,,,,On testing cache-coherent shared memories,On testing cache-coherent shared memories - ACM Digital Library,https://dl.acm.org/doi/pdf/10.1145/181014.181328,We present a series of results for testing an execution of a shared memory under scenarios that exploit the cache-coherence protocol.
In addition to reads AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Abdulla2019b,\cite{Abdulla2019b},{Optimal stateless model checking for reads-from equivalence under sequential consistency},,,True,False,Parosh Aziz Abdulla and Mohamed Faouzi Atig and Bengt Jonsson and Magnus L{\aa}ng and Tuan Phong Ngo and Konstantinos Sagonas,,,,10.1145/3360576,Proc. {ACM} Program. Lang.,{Optimal stateless model checking for reads-from equivalence under sequential consistency},Optimal stateless model checking for reads-from equivalence ...,https://dl.acm.org/doi/10.1145/3360576,We present a new approach for stateless model checking (SMC) of multithreaded programs under Sequential Consistency (SC) semantics. AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Chalupa2018,\cite{Chalupa2018},Data-centric Dynamic Partial Order Reduction,http://arxiv.org/abs/1610.01188v6,"We present a new dynamic partial-order reduction method for stateless model checking of concurrent programs. A common approach for exploring program behaviors relies on enumerating the traces of the program, without storing the visited states (aka stateless exploration). As the number of distinct traces grows exponentially, dynamic partial-order reduction (DPOR) techniques have been successfully used to partition the space of traces into equivalence classes (Mazurkiewicz partitioning), with the goal of exploring only few representative traces from each class. We introduce a new equivalence on traces under sequential consistency semantics, which we call the observation equivalence. Two traces are observationally equivalent if every read event observes the same write event in both traces. While the traditional Mazurkiewicz equivalence is control-centric, our new definition is data-centric. We show that our observation equivalence is coarser than the Mazurkiewicz equivalence, and in many cases even exponentially coarser. We devise a DPOR exploration of the trace space, called data-centric DPOR, based on the observation equivalence. For acyclic architectures, our algorithm is guaranteed to explore exactly one representative trace from each observation class, while spending polynomial time per class. Hence, our algorithm is optimal wrt the observation equivalence, and in several cases explores exponentially fewer traces than any enumerative method based on the Mazurkiewicz equivalence. For cyclic architectures, we consider an equivalence between traces which is finer than the observation equivalence; but coarser than the Mazurkiewicz equivalence, and in some cases is exponentially coarser. Our data-centric DPOR algorithm remains optimal under this trace equivalence.",True,True,"Chalupa, Marek and Chatterjee, Krishnendu and Pavlogiannis, Andreas and Sinha, Nishant and Vaidya, Kapil",2018.0,,,10.1145/3158119,Proceedings of the ACM on Programming Languages,Data-centric Dynamic Partial Order Reduction,[1610.01188] Data-centric Dynamic Partial Order Reduction - arXiv,https://arxiv.org/abs/1610.01188,Abstract:We present a new dynamic partial-order reduction method for stateless model checking of concurrent programs. AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Mathur2020,\cite{Mathur2020},The Complexity of Dynamic Data Race Prediction,http://arxiv.org/abs/2004.14931v2,"Writing concurrent programs is notoriously hard due to scheduling non-determinism. The most common concurrency bugs are data races, which are accesses to a shared resource that can be executed concurrently.
Dynamic data-race prediction is the most standard technique for detecting data races: given an observed, data-race-free trace $t$, the task is to determine whether $t$ can be reordered to a trace $t^*$ that exposes a data-race. Although the problem has received significant practical attention for over three decades, its complexity has remained elusive. In this work, we address this lacuna, identifying sources of intractability and conditions under which the problem is efficiently solvable. Given a trace $t$ of size $n$ over $k$ threads, our main results are as follows. First, we establish a general $O(k\cdot n^{2\cdot (k-1)})$ upper-bound, as well as an $O(n^k)$ upper-bound when certain parameters of $t$ are constant. In addition, we show that the problem is NP-hard and even W[1]-hard parameterized by $k$, and thus unlikely to be fixed-parameter tractable. Second, we study the problem over acyclic communication topologies, such as server-clients hierarchies. We establish an $O(k^2\cdot d\cdot n^2\cdot \log n)$ upper-bound, where $d$ is the number of shared variables accessed in $t$. In addition, we show that even for traces with $k=2$ threads, the problem has no $O(n^{2-\epsilon})$ algorithm under Orthogonal Vectors. Since any trace with 2 threads defines an acyclic topology, our upper-bound for this case is optimal wrt polynomial improvements for up to moderate values of $k$ and $d$. Finally, we study a distance-bounded version of the problem, where the task is to expose a data race by a witness trace that is similar to $t$. We develop an algorithm that works in $O(n)$ time when certain parameters of $t$ are constant.",True,True,"Mathur, Umang and Pavlogiannis, Andreas and Viswanathan, Mahesh",2020.0,,https://dl.acm.org/doi/10.1145/3373718.3394783,10.1145/3373718.3394783,,The Complexity of Dynamic Data Race Prediction,The Complexity of Dynamic Data Race Prediction,http://arxiv.org/pdf/2004.14931v2,"Writing concurrent programs is notoriously hard due to scheduling non-determinism. The most common concurrency bugs are data races, which are accesses to a shared resource that can be executed concurrently. Dynamic data-race prediction is the most standard technique for detecting data races: given an observed, data-race-free trace $t$, the task is to determine whether $t$ can be reordered to a trace $t^*$ that exposes a data-race. Although the problem has received significant practical attention for over three decades, its complexity has remained elusive. In this work, we address this lacuna, identifying sources of intractability and conditions under which the problem is efficiently solvable. Given a trace $t$ of size $n$ over $k$ threads, our main results are as follows. First, we establish a general $O(k\cdot n^{2\cdot (k-1)})$ upper-bound, as well as an $O(n^k)$ upper-bound when certain parameters of $t$ are constant. In addition, we show that the problem is NP-hard and even W[1]-hard parameterized by $k$, and thus unlikely to be fixed-parameter tractable. Second, we study the problem over acyclic communication topologies, such as server-clients hierarchies. We establish an $O(k^2\cdot d\cdot n^2\cdot \log n)$ upper-bound, where $d$ is the number of shared variables accessed in $t$. In addition, we show that even for traces with $k=2$ threads, the problem has no $O(n^{2-\epsilon})$ algorithm under Orthogonal Vectors. Since any trace with 2 threads defines an acyclic topology, our upper-bound for this case is optimal wrt polynomial improvements for up to moderate values of $k$ and $d$. 
Finally, we study a distance-bounded version of the problem, where the task is to expose a data race by a witness trace that is similar to $t$. We develop an algorithm that works in $O(n)$ time when certain parameters of $t$ are constant." AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Bui2021,\cite{Bui2021},The Reads-From Equivalence for the TSO and PSO Memory Models,http://arxiv.org/abs/2011.11763v3,"The verification of concurrent programs remains an open challenge due to the non-determinism in inter-process communication. One algorithmic problem in this challenge is the consistency verification of concurrent executions. Consistency verification under a reads-from map allows to compute the reads-from (RF) equivalence between concurrent traces, with direct applications to areas such as Stateless Model Checking (SMC). The RF equivalence was recently shown to be coarser than the standard Mazurkiewicz equivalence, leading to impressive scalability improvements for SMC under SC (sequential consistency). However, for the relaxed memory models of TSO and PSO (total/partial store order), the algorithmic problem of deciding the RF equivalence, as well as its impact on SMC, has been elusive. In this work we solve the problem of consistency verification for the TSO and PSO memory models given a reads-from map, denoted VTSO-rf and VPSO-rf, respectively. For an execution of $n$ events over $k$ threads and $d$ variables, we establish novel bounds that scale as $n^{k+1}$ for TSO and as $n^{k+1}\cdot \min(n^{k^2}, 2^{k\cdot d})$ for PSO. Based on our solution to these problems, we develop an SMC algorithm under TSO and PSO that uses the RF equivalence. The algorithm is exploration-optimal, in the sense that it is guaranteed to explore each class of the RF partitioning exactly once, and spends polynomial time per class when $k$ is bounded. We implement all our algorithms in the SMC tool Nidhugg, and perform a large number of experiments over benchmarks from existing literature. Our experimental results show that our algorithms for VTSO-rf and VPSO-rf provide significant scalability improvements over standard alternatives. 
When used for SMC, the RF partitioning is often much coarser than the standard Shasha-Snir partitioning for TSO/PSO, which yields a significant speedup in the model checking task.",True,True,"Bui, Truc Lam and Chatterjee, Krishnendu and Gautam, Tushar and Pavlogiannis, Andreas and Toman, Viktor",2021.0,,https://dl.acm.org/doi/10.1145/3485541,10.1145/3485541,Proceedings of the ACM on Programming Languages,The Reads-From Equivalence for the TSO and PSO Memory Models,The reads-from equivalence for the TSO and PSO memory ...,https://dl.acm.org/doi/10.1145/3485541,"In this work we solve the algorithmic problem of consistency verification for the TSO and PSO memory models given a reads-from map, denoted VTSO-rf and VPSO-rf" AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Baty2011,\cite{Baty2011},Mathematizing C++ concurrency,,,True,False,"Batty, Mark and Owens, Scott and Sarkar, Susmit and Sewell, Peter and Weber, Tjark",2011.0,,https://doi.org/10.1145/1926385.1926394,10.1145/1926385.1926394,,Mathematizing C++ concurrency,[PDF] Mathematizing C++ Concurrency - University of Cambridge,https://www.cl.cam.ac.uk/~pes20/cpp/popl085ap-sewell.pdf,"Here we describe C++ concurrency incrementally, starting with single-threaded programs and then adding threads and locks, SC atomics, and low-level atomics (" AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Lahav2015,\cite{Lahav2015},Owicki-{{Gries Reasoning}} for {{Weak Memory Models}},,,True,False,"Lahav, Ori and Vafeiadis, Viktor",2015.0,,https://link.springer.com/10.1007/978-3-662-47666-6_25,10.1007/978-3-662-47666-6_25,,Owicki-{{Gries Reasoning}} for {{Weak Memory Models}},Owicki-Gries Reasoning for Weak Memory Models,https://plv.mpi-sws.org/ogra/,"We show that even in the absence of auxiliary variables, the well-known Owicki-Gries method for verifying concurrent programs is unsound for weak memory models." AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Bouajjani2017a,\cite{Bouajjani2017a},On Verifying Causal Consistency,http://arxiv.org/abs/1611.00580v2,"Causal consistency is one of the most adopted consistency criteria for distributed implementations of data structures. It ensures that operations are executed at all sites according to their causal precedence. We address the issue of verifying automatically whether the executions of an implementation of a data structure are causally consistent. We consider two problems: (1) checking whether one single execution is causally consistent, which is relevant for developing testing and bug finding algorithms, and (2) verifying whether all the executions of an implementation are causally consistent. We show that the first problem is NP-complete. This holds even for the read-write memory abstraction, which is a building block of many modern distributed systems. Indeed, such systems often store data in key-value stores, which are instances of the read-write memory abstraction. Moreover, we prove that, surprisingly, the second problem is undecidable, and again this holds even for the read-write memory abstraction. 
However, we show that for the read-write memory abstraction, these negative results can be circumvented if the implementations are data independent, i.e., their behaviors do not depend on the data values that are written or read at each moment, which is a realistic assumption.",True,True,"Bouajjani, Ahmed and Enea, Constantin and Guerraoui, Rachid and Hamza, Jad",2017.0,,https://dl.acm.org/doi/10.1145/3093333.3009888,10.1145/3093333.3009888,SIGPLAN Not.,On Verifying Causal Consistency,On Verifying Causal Consistency,http://arxiv.org/pdf/1611.00580v2,"Causal consistency is one of the most adopted consistency criteria for distributed implementations of data structures. It ensures that operations are executed at all sites according to their causal precedence. We address the issue of verifying automatically whether the executions of an implementation of a data structure are causally consistent. We consider two problems: (1) checking whether one single execution is causally consistent, which is relevant for developing testing and bug finding algorithms, and (2) verifying whether all the executions of an implementation are causally consistent. We show that the first problem is NP-complete. This holds even for the read-write memory abstraction, which is a building block of many modern distributed systems. Indeed, such systems often store data in key-value stores, which are instances of the read-write memory abstraction. Moreover, we prove that, surprisingly, the second problem is undecidable, and again this holds even for the read-write memory abstraction. However, we show that for the read-write memory abstraction, these negative results can be circumvented if the implementations are data independent, i.e., their behaviors do not depend on the data values that are written or read at each moment, which is a realistic assumption." AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Chakraborty2024a,\cite{Chakraborty2024a},How Hard is Weak-Memory Testing?,http://arxiv.org/abs/2311.04302v2,"Weak-memory models are standard formal specifications of concurrency across hardware, programming languages, and distributed systems. A fundamental computational problem is consistency testing: is the observed execution of a concurrent program in alignment with the specification of the underlying system? The problem has been studied extensively across Sequential Consistency (SC) and weak memory, and proven to be NP-complete when some aspect of the input (e.g., number of threads/memory locations) is unbounded. This unboundedness has left a natural question open: are there efficient parameterized algorithms for testing? The main contribution of this paper is a deep hardness result for consistency testing under many popular weak-memory models: the problem remains NP-complete even in its bounded setting, where candidate executions contain a bounded number of threads, memory locations, and values. This hardness spreads across several Release-Acquire variants of C11, a popular variant of its Relaxed fragment, popular Causal Consistency models, and the POWER architecture. To our knowledge, this is the first result that fully exposes the hardness of weak-memory testing and proves that the problem admits no parameterization under standard input parameters. 
It also yields a computational separation of these models from SC, x86-TSO, PSO, and Relaxed, for which bounded consistency testing is either known (for SC), or shown here (for the rest), to be in polynomial time.",True,True,"Chakraborty, Soham and Krishna, Shankara Narayanan and Mathur, Umang and Pavlogiannis, Andreas",2024.0,,https://dl.acm.org/doi/10.1145/3632908,10.1145/3632908,Proceedings of the ACM on Programming Languages,How Hard is Weak-Memory Testing?,[2311.04302] How Hard is Weak-Memory Testing? - arXiv,https://arxiv.org/abs/2311.04302,The main contribution of this paper is a deep hardness result for consistency testing under many popular weak-memory models. AWDIT: An Optimal Weak Database Isolation Tester,2504.06975v1,Tunc2023,\cite{Tunc2023},Optimal Reads-From Consistency Checking for C11-Style Memory Models,http://arxiv.org/abs/2304.03714v2,"Over the years, several memory models have been proposed to capture the subtle concurrency semantics of C/C++.One of the most fundamental problems associated with a memory model M is consistency checking: given an execution X, is X consistent with M? This problem lies at the heart of numerous applications, including specification testing and litmus tests, stateless model checking, and dynamic analyses. As such, it has been explored extensively and its complexity is well-understood for traditional models like SC and TSO. However, less is known for the numerous model variants of C/C++, for which the problem becomes challenging due to the intricacies of their concurrency primitives. In this work we study the problem of consistency checking for popular variants of the C11 memory model, in particular, the RC20 model, its release-acquire (RA) fragment, the strong and weak variants of RA (SRA and WRA), as well as the Relaxed fragment of RC20. Motivated by applications in testing and model checking, we focus on reads-from consistency checking. The input is an execution X specifying a set of events, their program order and their reads-from relation, and the task is to decide the existence of a modification order on the writes of X that makes X consistent in a memory model. We draw a rich complexity landscape for this problem; our results include (i)~nearly-linear-time algorithms for certain variants, which improve over prior results, (ii)~fine-grained optimality results, as well as (iii)~matching upper and lower bounds (NP-hardness) for other variants. To our knowledge, this is the first work to characterize the complexity of consistency checking for C11 memory models. We have implemented our algorithms inside the TruSt model checker and the C11Tester testing tool. Experiments on standard benchmarks show that our new algorithms improve consistency checking, often by a significant margin.",True,True,"Tun{\c c}, H{\""u}nkar Can and Abdulla, Parosh Aziz and Chakraborty, Soham and Krishna, Shankaranarayanan and Mathur, Umang and Pavlogiannis, Andreas",2023.0,,https://dl.acm.org/doi/10.1145/3591251,10.1145/3591251,Proceedings of the ACM on Programming Languages,Optimal Reads-From Consistency Checking for C11-Style Memory Models,[PDF] Optimal Reads-From Consistency Checking for C11-Style Memory ...,https://www.comp.nus.edu.sg/~umathur/papers/rc20-rf-consistency-pldi23.pdf,"In this work we study the problem of consistency checking for popular variants of the C11 memory model, in particular, the RC20 model, its release-acquire. 
(RA)" SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model,2506.01833v1,kathail2024leveraging,\cite{kathail2024leveraging},"Leveraging genomic deep learning models for non-coding variant effect prediction",http://arxiv.org/abs/2411.11158v1,"The majority of genetic variants identified in genome-wide association studies of complex traits are non-coding, and characterizing their function remains an important challenge in human genetics. Genomic deep learning models have emerged as a promising approach to enable in silico prediction of variant effects. These include supervised sequence-to-activity models, which predict genome-wide chromatin states or gene expression levels directly from DNA sequence, and self-supervised genomic language models. Here, we review progress in leveraging these models for non-coding variant effect prediction. We describe practical considerations for making such predictions and categorize the types of ground truth data that have been used to evaluate deep learning-based variant effect predictions, providing insight into the settings in which current models are most useful. We also discuss downstream applications of such models to understanding disease-relevant non-coding variants. Our review highlights key considerations for practitioners and opportunities for future improvements in model development and evaluation.",True,True,"Kathail, Pooja and Bajwa, Ayesha and Ioannidis, Nilah M",2024.0,,,,arXiv preprint arXiv:2411.11158,"Leveraging genomic deep learning models for non-coding variant effect prediction",Leveraging genomic deep learning models for non-coding ...,https://arxiv.org/abs/2411.11158,"by P Kathail · 2024 · Cited by 4 — Here, we review progress in leveraging these models for non-coding variant effect prediction. We describe practical considerations for making such predictions." SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model,2506.01833v1,zhou2015predicting,\cite{zhou2015predicting},Predicting effects of noncoding variants with deep learning--based sequence model,,,True,False,"Zhou, Jian and Troyanskaya, Olga G",2015.0,,,,Nature methods,Predicting effects of noncoding variants with deep learning--based sequence model,Predicting effects of noncoding variants with deep learning-based ...,https://pubmed.ncbi.nlm.nih.gov/26301843/,"To predict the noncoding-variant effects de novo from sequence, we developed a deep learning-based algorithmic framework, DeepSEA (http://deepsea.princeton.edu/), that directly learns a regulatory sequence code from large-scale chromatin-profiling data, enabling prediction of chromatin effects of sequence alterations with single-nucleotide sensitivity. * Modeling 0.6 million genes for the rational design of functional _cis_-regulatory variants and de novo design of _cis-_ regulatory sequences.Li T, Xu H, Teng S, Suo M, Bahitwa R, Xu M, Qian Y, Ramstein GP, Song B, Buckler ES, Wang H.Li T, et al.Proc Natl Acad Sci U S A. * DeepFun: a deep learning sequence-based model to decipher non-coding variant effect in a tissue- and cell type-specific manner.Pei G, Hu R, Jia P, Zhao Z.Pei G, et al.Nucleic Acids Res. 2021 Jul 2;49(W1):W131-W139." 
SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model,2506.01833v1,kelley2018sequential,\cite{kelley2018sequential},Sequential regulatory activity prediction across chromosomes with convolutional neural networks,,,True,False,"Kelley, David R and Reshef, Yakir A and Bileschi, Maxwell and Belanger, David and McLean, Cory Y and Snoek, Jasper",2018.0,,,,Genome research,Sequential regulatory activity prediction across chromosomes with convolutional neural networks,Sequential regulatory activity prediction across chromosomes with ...,https://pubmed.ncbi.nlm.nih.gov/29588361/,"By use of convolutional neural networks, this system identifies promoters and distal regulatory elements and synthesizes their content to make effective gene expression predictions." SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model,2506.01833v1,zhou2018deep,\cite{zhou2018deep},Deep learning sequence-based ab initio prediction of variant effects on expression and disease risk,,,True,False,"Zhou, Jian and Theesfeld, Chandra L and Yao, Kevin and Chen, Kathleen M and Wong, Aaron K and Troyanskaya, Olga G",2018.0,,,,Nature genetics,Deep learning sequence-based ab initio prediction of variant effects on expression and disease risk,Deep learning sequence-based ab initio prediction of variant effects ...,https://www.nature.com/articles/s41588-018-0160-6,"Key challenges for human genetics, precision medicine and evolutionary biology include deciphering the regulatory code of gene expression and understanding the transcriptional effects of genome variation."
SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model,2506.01833v1,chen2022sequence,\cite{chen2022sequence},A sequence-based global map of regulatory activity for deciphering human genetics,,,True,False,"Chen, Kathleen M and Wong, Aaron K and Troyanskaya, Olga G and Zhou, Jian",2022.0,,,,Nature genetics,A sequence-based global map of regulatory activity for deciphering human genetics,A sequence-based global map of regulatory activity for deciphering ...,https://www.nature.com/articles/s41588-022-01102-2,"Sequence classes cover diverse types of regulatory activities, such as promoter or cell type-specific enhancer activity, across the whole genome by integrating sequence-based predictions from histone marks, TFs and chromatin accessibility across a wide range of cell types. Next, we applied the Sei model to develop a global, quantitative map from genomic sequences to specific classes of regulatory activities, which we termed sequence classes, by integrating the wide range of chromatin profiles predicted by Sei. Therefore, sequence classes were mapped directly from sequence, with each class representing a distinct program of regulatory activities across the tissues and cell types covered by the Sei model." SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model,2506.01833v1,enformer,\cite{enformer},Effective gene expression prediction from sequence by integrating long-range interactions,,,True,False,"Avsec, {\v{Z}}iga and Agarwal, Vikram and Visentin, Daniel and Ledsam, Joseph R and Grabska-Barwinska, Agnieszka and Taylor, Kyle R and Assael, Yannis and Jumper, John and Kohli, Pushmeet and Kelley, David R",2021.0,,,,Nature methods,Effective gene expression prediction from sequence by integrating long-range interactions,Effective gene expression prediction from sequence by ...,https://www.nature.com/articles/s41592-021-01252-x,"Here, we report substantially improved gene expression prediction accuracy from DNA sequences through the use of a deep learning architecture, called Enformer, that is able to integrate information from long-range interactions (up to 100 kb away) in the genome. We developed a new model architecture named Enformer (a portmanteau of enhancer and transformer) to predict gene expression and chromatin states in humans and mice from DNA sequences (Fig.1a and Extended Data Fig.1). Gene expression predictions also better captured tissue- or cell-type specificity (Fig.1b, right), including for closely related samples (Extended Data Fig.3). Enformer also yielded greater predictive accuracy than ExPecto, a model trained to predict gene expression levels measured by RNA-seq, for both across-genes (0.850 versus 0.812 Spearman _r_) and across-tissues (0.451 versus 0.368 Spearman _r_) evaluation (Extended Data Fig.4)."
SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model,2506.01833v1,NT,\cite{NT},Nucleotide Transformer: building and evaluating robust foundation models for human genomics,,,True,False,"Dalla-Torre, Hugo and Gonzalez, Liam and Mendoza-Revilla, Javier and Lopez Carranza, Nicolas and Grzywaczewski, Adam Henryk and Oteri, Francesco and Dallago, Christian and Trop, Evan and de Almeida, Bernardo P and Sirelkhatim, Hassan and others",2024.0,,,,Nature Methods,Nucleotide Transformer: building and evaluating robust foundation models for human genomics,Nucleotide Transformer: building and evaluating robust foundation ...,https://www.nature.com/articles/s41592-024-02523-z,"Here, we present an extensive study of foundation models pre-trained on DNA sequences, named Nucleotide Transformer, ranging from 50 million up to 2.5 billion parameters and integrating information from 3,202 human genomes and 850 genomes from diverse species. Inspired by trends in NLP, where larger training datasets and model sizes have demonstrated improved performance, we constructed transformer models with varying parameter sizes and datasets: (1) a 500-million-parameter model trained on sequences extracted from the human reference genome (‘Human ref 500M’); (2) a 500-million-parameter model (‘1000G 500M’) and (3) a 2.5-billion-parameter model (‘1000G 2.5B’) both trained on 3,202 genetically diverse human genomes; and (4) a 2.5-billion-parameter model, encompassing 850 species from diverse phyla (‘Multispecies 2.5B’), including 11 model organisms (Fig. 1c and Supplementary Tables 1–4)." SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model,2506.01833v1,DNABert,\cite{DNABert},DNABERT: pre-trained Bidirectional Encoder Representations from Transformers model for DNA-language in genome,,,True,False,"Ji, Yanrong and Zhou, Zhihan and Liu, Han and Davuluri, Ramana V",2021.0,,,,Bioinformatics,DNABERT: pre-trained Bidirectional Encoder Representations from Transformers model for DNA-language in genome,DNABERT: pre-trained Bidirectional Encoder Representations from ...,https://pubmed.ncbi.nlm.nih.gov/33538820/,"To address this challenge, we developed a novel pre-trained bidirectional encoder representation, named DNABERT, to capture global and transferrable understanding of genomic DNA sequences based on up and downstream nucleotide contexts. We anticipate that the pre-trained DNABERT model can be fine-tuned to many other sequence analyses tasks. The source code, pretrained and finetuned model for DNABERT are available at GitHub. DNABERT significantly outperforms other models in identifying promoter regions and in finding splice sites." SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model,2506.01833v1,devlin2019bert,\cite{devlin2019bert},"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",http://arxiv.org/abs/1810.04805v2,"We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).",True,True,"Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina",2019.0,,,,,"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",[PDF] BERT: Pre-training of Deep Bidirectional Transformers for Language ...,https://aclanthology.org/N19-1423.pdf,"Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. More recently, sentence or document encoders which produce contextual token representations have been pre-trained from unlabeled text and fine-tuned for a supervised downstream task (Dai and Le, 2015; Howard and Ruder, 2018; Radford et al., 2018)." SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model,2506.01833v1,celikkanatrevisiting,\cite{celikkanatrevisiting},Revisiting K-mer Profile for Effective and Scalable Genome Representation Learning,,,True,False,"Celikkanat, Abdulkadir and Masegosa, Andres R and Nielsen, Thomas Dyhre",2024.0,,,,,Revisiting K-mer Profile for Effective and Scalable Genome Representation Learning,Revisiting K-mer Profile for Effective and Scalable Genome Representation Learning,http://arxiv.org/pdf/2411.02125v1,"Obtaining effective representations of DNA sequences is crucial for genome analysis. Metagenomic binning, for instance, relies on genome representations to cluster complex mixtures of DNA fragments from biological samples with the aim of determining their microbial compositions. In this paper, we revisit k-mer-based representations of genomes and provide a theoretical analysis of their use in representation learning. Based on the analysis, we propose a lightweight and scalable model for performing metagenomic binning at the genome read level, relying only on the k-mer compositions of the DNA fragments. We compare the model to recent genome foundation models and demonstrate that while the models are comparable in performance, the proposed model is significantly more effective in terms of scalability, a crucial aspect for performing metagenomic binning of real-world datasets."
SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model,2506.01833v1,DNABert2,\cite{DNABert2},DNABERT-2: Efficient Foundation Model and Benchmark For Multi-Species Genomes,,,True,False,"Zhou, Zhihan and Ji, Yanrong and Li, Weijian and Dutta, Pratik and Davuluri, Ramana V and Liu, Han",2024.0,,,,,DNABERT-2: Efficient Foundation Model and Benchmark For Multi-Species Genomes,DNABERT-2: Efficient Foundation Model and Benchmark for Multi ...,https://github.com/MAGICS-LAB/DNABERT_2,"DNABERT-2 is a foundation model trained on large-scale multi-species genome that achieves the state-of-the-art performance on 28 tasks of the GUE benchmark. The pre-trained model is available at Huggingface as zhihan1996/DNABERT-2-117M: model = AutoModel.from_pretrained(""zhihan1996/DNABERT-2-117M"", trust_remote_code=True). You can also use the run_mlm.py at https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling by importing the BertModelForMaskedLM from https://huggingface.co/zhihan1996/DNABERT-2-117M/blob/main/bert_layers.py." SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model,2506.01833v1,sanabria2024dna,\cite{sanabria2024dna},DNA language model GROVER learns sequence context in the human genome,,,True,False,"Sanabria, Melissa and Hirsch, Jonas and Joubert, Pierre M and Poetsch, Anna R",2024.0,,,,Nature Machine Intelligence,DNA language model GROVER learns sequence context in the human genome,DNA language model GROVER learns sequence context in ... - Nature,https://www.nature.com/articles/s42256-024-00872-0,"We established byte-pair encoding on the human genome and trained a foundation language model called GROVER (Genome Rules Obtained Via Extracted Representations) with the vocabulary selected via a custom task, next-_k_-mer prediction."
SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model,2506.01833v1,nguyen2024sequence,\cite{nguyen2024sequence},Sequence modeling and design from molecular to genome scale with Evo,,,True,False,"Nguyen, Eric and Poli, Michael and Durrant, Matthew G and Kang, Brian and Katrekar, Dhruva and Li, David B and Bartie, Liam J and Thomas, Armin W and King, Samuel H and Brixi, Garyk and others",2024.0,,,,Science,Sequence modeling and design from molecular to genome scale with Evo,Sequence modeling and design from molecular to genome ...,https://pubmed.ncbi.nlm.nih.gov/39541441/,"Sequence modeling and design from molecular to genome scale with Evo - PubMed Evo generalizes across DNA, RNA, and proteins, enabling zero-shot function prediction competitive with domain-specific language models and the generation of functional CRISPR-Cas and transposon systems, representing the first examples of protein-RNA and protein-DNA codesign with a language model. Evo also learns how small mutations affect whole-organism fitness and generates megabase-scale sequences with plausible genomic architecture. (A) A model of genome sequences at single-nucleotide resolution could learn all of the information encoded in regulatory DNA and in the sequences of the other modalities within the central dogma (proteins, coding RNA, and ncRNA). (B) Fine-tuning Evo on 8-kb-length genomic sequences containing CRISPR-Cas systems after its initial 8k pretraining phase." SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model,2506.01833v1,HyenaDNA,\cite{HyenaDNA},"HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution",http://arxiv.org/abs/2306.15794v2,"Genomic (DNA) sequences encode an enormous amount of information for gene regulation and protein synthesis. Similar to natural language models, researchers have proposed foundation models in genomics to learn generalizable features from unlabeled genome data that can then be fine-tuned for downstream tasks such as identifying regulatory elements. Due to the quadratic scaling of attention, previous Transformer-based genomic models have used 512 to 4k tokens as context (<0.001% of the human genome), significantly limiting the modeling of long-range interactions in DNA. In addition, these methods rely on tokenizers or fixed k-mers to aggregate meaningful DNA units, losing single nucleotide resolution where subtle genetic variations can completely alter protein function via single nucleotide polymorphisms (SNPs). Recently, Hyena, a large language model based on implicit convolutions was shown to match attention in quality while allowing longer context lengths and lower time complexity. Leveraging Hyena's new long-range capabilities, we present HyenaDNA, a genomic foundation model pretrained on the human reference genome with context lengths of up to 1 million tokens at the single nucleotide-level - an up to 500x increase over previous dense attention-based models. HyenaDNA scales sub-quadratically in sequence length (training up to 160x faster than Transformer), uses single nucleotide tokens, and has full global context at each layer. We explore what longer context enables - including the first use of in-context learning in genomics. On fine-tuned benchmarks from the Nucleotide Transformer, HyenaDNA reaches state-of-the-art (SotA) on 12 of 18 datasets using a model with orders of magnitude less parameters and pretraining data. On the GenomicBenchmarks, HyenaDNA surpasses SotA on 7 of 8 datasets on average by +10 accuracy points. 
Code at https://github.com/HazyResearch/hyena-dna.",True,True,"Nguyen, Eric and Poli, Michael and Faizi, Marjan and Thomas, Armin and Wornow, Michael and Birch-Sykes, Callum and Massaroli, Stefano and Patel, Aman and Rabideau, Clayton and Bengio, Yoshua and others",2024.0,,,,Advances in neural information processing systems,"HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution",HyenaDNA: Long-Range Genomic Sequence Modeling at Single ...,https://arxiv.org/abs/2306.15794,"Leveraging Hyena's new long-range capabilities, we present HyenaDNA, a genomic foundation model pretrained on the human reference genome with context lengths of up to 1 million tokens at the single nucleotide-level - an up to 500x increase over previous dense attention-based models." SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model,2506.01833v1,MoE0,\cite{MoE0},Adaptive Mixtures of Local Experts,,,True,False,"Jacobs, Robert A. and Jordan, Michael I. and Nowlan, Steven J. and Hinton, Geoffrey E.",1991.0,,,10.1162/neco.1991.3.1.79,Neural Computation,Adaptive Mixtures of Local Experts,Adaptive Mixtures of Local Experts - Computer Science,https://www.cs.toronto.edu/~hinton/absps/jjnh91.pdf,"by RA Jacobs · Cited by 7088 — Each expert is a feed-forward network and all experts receive the same input and have the same number of outputs. The gating network is also feedforward, and" SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model,2506.01833v1,SparseMoE,\cite{SparseMoE},"Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer",http://arxiv.org/abs/1701.06538v1,"The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers.
On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.",True,True,"Shazeer, Noam and Mirhoseini, Azalia and Maziarz, Krzysztof and Davis, Andy and Le, Quoc and Hinton, Geoffrey and Dean, Jeff",2017.0,,,,arXiv preprint arXiv:1701.06538,"Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer",Outrageously Large Neural Networks: The Sparsely-Gated...,https://openreview.net/forum?id=B1ckMDqlg,"We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks." SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model,2506.01833v1,fedus2022switch,\cite{fedus2022switch},"Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity",http://arxiv.org/abs/2101.03961v3,"In deep learning, models typically reuse the same parameters for all inputs. Mixture of Experts (MoE) defies this and instead selects different parameters for each incoming example. The result is a sparsely-activated model -- with outrageous numbers of parameters -- but a constant computational cost. However, despite several notable successes of MoE, widespread adoption has been hindered by complexity, communication costs and training instability -- we address these with the Switch Transformer. We simplify the MoE routing algorithm and design intuitive improved models with reduced communication and computational costs. Our proposed training techniques help wrangle the instabilities and we show large sparse models may be trained, for the first time, with lower precision (bfloat16) formats. We design models based off T5-Base and T5-Large to obtain up to 7x increases in pre-training speed with the same computational resources. These improvements extend into multilingual settings where we measure gains over the mT5-Base version across all 101 languages. Finally, we advance the current scale of language models by pre-training up to trillion parameter models on the ""Colossal Clean Crawled Corpus"" and achieve a 4x speedup over the T5-XXL model.",True,True,"Fedus, William and Zoph, Barret and Shazeer, Noam",2022.0,,,,Journal of Machine Learning Research,"Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity",Switch Transformers: Scaling to Trillion Parameter Models ...,https://arxiv.org/abs/2101.03961,"View a PDF of the paper titled Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity, by William Fedus and 2 other authors" SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model,2506.01833v1,jiang2023mistral,\cite{jiang2023mistral},Mistral 7B,http://arxiv.org/abs/2310.06825v1,"We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered for superior performance and efficiency.
Mistral 7B outperforms Llama 2 13B across all evaluated benchmarks, and Llama 1 34B in reasoning, mathematics, and code generation. Our model leverages grouped-query attention (GQA) for faster inference, coupled with sliding window attention (SWA) to effectively handle sequences of arbitrary length with a reduced inference cost. We also provide a model fine-tuned to follow instructions, Mistral 7B -- Instruct, that surpasses the Llama 2 13B -- Chat model both on human and automated benchmarks. Our models are released under the Apache 2.0 license.",True,True,"Jiang, Albert Q and Sablayrolles, Alexandre and Mensch, Arthur and Bamford, Chris and Chaplot, Devendra Singh and Casas, Diego de las and Bressand, Florian and Lengyel, Gianna and Lample, Guillaume and Saulnier, Lucile and others",2023.0,,,,arXiv preprint arXiv:2310.06825,Mistral 7B,Mistral 7B,http://arxiv.org/pdf/2310.06825v1,"We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered for superior performance and efficiency. Mistral 7B outperforms Llama 2 13B across all evaluated benchmarks, and Llama 1 34B in reasoning, mathematics, and code generation. Our model leverages grouped-query attention (GQA) for faster inference, coupled with sliding window attention (SWA) to effectively handle sequences of arbitrary length with a reduced inference cost. We also provide a model fine-tuned to follow instructions, Mistral 7B -- Instruct, that surpasses the Llama 2 13B -- Chat model both on human and automated benchmarks. Our models are released under the Apache 2.0 license." SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model,2506.01833v1,deepseek,\cite{deepseek},"Deepseek-v2: A strong, economical, and efficient mixture-of-experts language model",,,True,False,"Liu, Aixin and Feng, Bei and Wang, Bin and Wang, Bingxuan and Liu, Bo and Zhao, Chenggang and Dengr, Chengqi and Ruan, Chong and Dai, Damai and Guo, Daya and others",2024.0,,,,arXiv preprint arXiv:2405.04434,"Deepseek-v2: A strong, economical, and efficient mixture-of-experts language model","DeepSeek-V2: A Strong, Economical, and Efficient Mixture ... - GitHub",https://github.com/deepseek-ai/DeepSeek-V2,"GitHub - deepseek-ai/DeepSeek-V2: DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model | DeepSeek-V2-Lite-Chat (SFT) | 16B | 2.4B | 32k | 🤗 HuggingFace | | DeepSeek-V2-Chat (RL) | 236B | 21B | 128k | 🤗 HuggingFace | We evaluate our model on AlpacaEval 2.0 and MTBench, showing the competitive performance of DeepSeek-V2-Chat-RL on English conversation generation. | DeepSeek-V2-Lite 16B Chat | open source | 6.01 | 4.71 | 7.32 | model_name = ""deepseek-ai/DeepSeek-V2"" model_name = ""deepseek-ai/DeepSeek-V2-Chat"" python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-V2-Chat --tp 8 --trust-remote-code python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-V2-Lite-Chat --trust-remote-code --enable-torch-compile python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-V2-Chat --tp 8 --trust-remote-code --quant fp8 --kv-cache-dtype fp8_e5m2 model='deepseek-chat', The use of DeepSeek-V2 Base/Chat models is subject to the Model License."
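The mixture-of-experts entries above (Jacobs et al.'s adaptive mixtures, the sparsely-gated layer, Switch Transformers, and the MoE variants behind Mistral/DeepSeek) all share one core mechanism: a trainable gating network that routes each input to a small subset of experts. A minimal top-k routing sketch in PyTorch (the dimensions, the expert MLPs, and the omission of the auxiliary load-balancing loss are all simplifying assumptions, not any one paper's implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Top-k gated mixture-of-experts layer; only k experts run per token."""
    def __init__(self, d_model=64, d_hidden=256, n_experts=8, k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts))
        self.gate = nn.Linear(d_model, n_experts)  # trainable gating network
        self.k = k

    def forward(self, x):                          # x: (n_tokens, d_model)
        top_v, top_i = self.gate(x).topk(self.k, dim=-1)
        weights = F.softmax(top_v, dim=-1)         # renormalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = top_i[:, slot] == e
                if mask.any():                     # dispatch only the matched tokens
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

y = SparseMoE()(torch.randn(16, 64))               # (16, 64); constant FLOPs per token
```

Production systems replace the Python loops with batched dispatch across devices and add a load-balancing loss; Switch Transformers simplify further by routing with k=1.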
"Spectral Insights into Data-Oblivious Critical Layers in Large Language Models",2506.00382v1,DBLP:conf/nips/MorcosRB18,\cite{DBLP:conf/nips/MorcosRB18},"Insights on representational similarity in neural networks with canonical correlation",http://arxiv.org/abs/1806.05759v3,"Comparing different neural network representations and determining how representations evolve over time remain challenging open questions in our understanding of the function of neural networks. Comparing representations in neural networks is fundamentally difficult as the structure of representations varies greatly, even across groups of networks trained on identical tasks, and over the course of training. Here, we develop projection weighted CCA (Canonical Correlation Analysis) as a tool for understanding neural networks, building off of SVCCA, a recently proposed method (Raghu et al., 2017). We first improve the core method, showing how to differentiate between signal and noise, and then apply this technique to compare across a group of CNNs, demonstrating that networks which generalize converge to more similar representations than networks which memorize, that wider networks converge to more similar solutions than narrow networks, and that trained networks with identical topology but different learning rates converge to distinct clusters with diverse representations. We also investigate the representational dynamics of RNNs, across both training and sequential timesteps, finding that RNNs converge in a bottom-up pattern over the course of training and that the hidden state is highly variable over the course of a sequence, even when accounting for linear transforms. Together, these results provide new insights into the function of CNNs and RNNs, and demonstrate the utility of using CCA to understand representations.",True,True,"Ari S. Morcos and Maithra Raghu and Samy Bengio",2018.0,,https://proceedings.neurips.cc/paper/2018/hash/a7a3d70c6d17a73140918996d03c014f-Abstract.html,,,"Insights on representational similarity in neural networks with canonical correlation",Reviews: Insights on representational similarity in neural ...,https://proceedings.neurips.cc/paper/2018/file/a7a3d70c6d17a73140918996d03c014f-Reviews.html,This paper presents projection weighted canonical correlation analysis (CCA) as a method to interrogate neural network representations. "Spectral Insights into Data-Oblivious Critical Layers in Large Language Models",2506.00382v1,DBLP:conf/nips/RaghuGYS17,\cite{DBLP:conf/nips/RaghuGYS17},"SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability",http://arxiv.org/abs/1706.05806v2,"We propose a new technique, Singular Vector Canonical Correlation Analysis (SVCCA), a tool for quickly comparing two representations in a way that is both invariant to affine transform (allowing comparison between different layers and networks) and fast to compute (allowing more comparisons to be calculated than with previous methods). We deploy this tool to measure the intrinsic dimensionality of layers, showing in some cases needless over-parameterization; to probe learning dynamics throughout training, finding that networks converge to final representations from the bottom up; to show where class-specific information in networks is formed; and to suggest new training regimes that simultaneously save computation and overfit less. 
Code: https://github.com/google/svcca/",True,True,"Maithra Raghu and Justin Gilmer and Jason Yosinski and Jascha Sohl{-}Dickstein",2017.0,,https://proceedings.neurips.cc/paper/2017/hash/dc6a7e655d7e5840e66733e9ee67cc69-Abstract.html,,,"SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability",SVCCA: Singular Vector Canonical Correlation Analysis for ...,http://papers.neurips.cc/paper/7188-svcca-singular-vector-canonical-correlation-analysis-for-deep-learning-dynamics-and-interpretability.pdf,"by M Raghu · Cited by 831 — We propose a new technique, Singular Vector Canonical Correlation Analysis (SVCCA), a tool for quickly comparing two representations in a way that is both." "Spectral Insights into Data-Oblivious Critical Layers in Large Language Models",2506.00382v1,DBLP:conf/icml/Kornblith0LH19,\cite{DBLP:conf/icml/Kornblith0LH19},Similarity of Neural Network Representations Revisited,http://arxiv.org/abs/1905.00414v4,"Recent work has sought to understand the behavior of neural networks by comparing representations between layers and between different trained models. We examine methods for comparing neural network representations based on canonical correlation analysis (CCA). We show that CCA belongs to a family of statistics for measuring multivariate similarity, but that neither CCA nor any other statistic that is invariant to invertible linear transformation can measure meaningful similarities between representations of higher dimension than the number of data points. We introduce a similarity index that measures the relationship between representational similarity matrices and does not suffer from this limitation. This similarity index is equivalent to centered kernel alignment (CKA) and is also closely connected to CCA. Unlike CCA, CKA can reliably identify correspondences between representations in networks trained from different initializations.",True,True,"Simon Kornblith and Mohammad Norouzi and Honglak Lee and Geoffrey E. Hinton",2019.0,,http://proceedings.mlr.press/v97/kornblith19a.html,,,Similarity of Neural Network Representations Revisited,Similarity of Neural Network Representations Revisited,http://arxiv.org/pdf/1905.00414v4,"Recent work has sought to understand the behavior of neural networks by comparing representations between layers and between different trained models. We examine methods for comparing neural network representations based on canonical correlation analysis (CCA). We show that CCA belongs to a family of statistics for measuring multivariate similarity, but that neither CCA nor any other statistic that is invariant to invertible linear transformation can measure meaningful similarities between representations of higher dimension than the number of data points. We introduce a similarity index that measures the relationship between representational similarity matrices and does not suffer from this limitation. This similarity index is equivalent to centered kernel alignment (CKA) and is also closely connected to CCA. Unlike CCA, CKA can reliably identify correspondences between representations in networks trained from different initializations." "Spectral Insights into Data-Oblivious Critical Layers in Large Language Models",2506.00382v1,DBLP:conf/iclr/NguyenRK21,\cite{DBLP:conf/iclr/NguyenRK21},"Do Wide and Deep Networks Learn the Same Things?
Uncovering How Neural Network Representations Vary with Width and Depth",,,True,False,"Thao Nguyen and Maithra Raghu and Simon Kornblith",2021.0,,https://openreview.net/forum?id=KJNcAkY8tY4,,,"Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth",Do Wide and Deep Networks Learn the Same Things? Uncovering ...,https://openreview.net/forum?id=KJNcAkY8tY4,"This paper studies whether neural networks with different architectures, especially different width and depth, learn similar representations." "Spectral Insights into Data-Oblivious Critical Layers in Large Language Models",2506.00382v1,phang-etal-2021-fine,\cite{phang-etal-2021-fine},"Fine-Tuned Transformers Show Clusters of Similar Representations Across Layers",http://arxiv.org/abs/2109.08406v2,"Despite the success of fine-tuning pretrained language encoders like BERT for downstream natural language understanding (NLU) tasks, it is still poorly understood how neural networks change after fine-tuning. In this work, we use centered kernel alignment (CKA), a method for comparing learned representations, to measure the similarity of representations in task-tuned models across layers. In experiments across twelve NLU tasks, we discover a consistent block diagonal structure in the similarity of representations within fine-tuned RoBERTa and ALBERT models, with strong similarity within clusters of earlier and later layers, but not between them. The similarity of later layer representations implies that later layers only marginally contribute to task performance, and we verify in experiments that the top few layers of fine-tuned Transformers can be discarded without hurting performance, even with no further tuning.",True,True,"Phang, Jason and Liu, Haokun and Bowman, Samuel R.",2021.0,,https://aclanthology.org/2021.blackboxnlp-1.42/,10.18653/v1/2021.blackboxnlp-1.42,,"Fine-Tuned Transformers Show Clusters of Similar Representations Across Layers",[PDF] Fine-Tuned Transformers Show Clusters of Similar Representations ...,https://aclanthology.org/2021.blackboxnlp-1.42.pdf,"In this work, we study how learned representa- tions change through fine-tuning by studying the similarity of representations between layers of" "Spectral Insights into Data-Oblivious Critical Layers in Large Language Models",2506.00382v1,DBLP:conf/nips/LiuCYY24,\cite{DBLP:conf/nips/LiuCYY24},"Exploring Consistency in Graph Representations:from Graph Kernels to Graph Neural Networks",http://arxiv.org/abs/2410.23748v2,"Graph Neural Networks (GNNs) have emerged as a dominant approach in graph representation learning, yet they often struggle to capture consistent similarity relationships among graphs. While graph kernel methods such as the Weisfeiler-Lehman subtree (WL-subtree) and Weisfeiler-Lehman optimal assignment (WLOA) kernels are effective in capturing similarity relationships, they rely heavily on predefined kernels and lack sufficient non-linearity for more complex data patterns. Our work aims to bridge the gap between neural network methods and kernel approaches by enabling GNNs to consistently capture relational structures in their learned representations. Given the analogy between the message-passing process of GNNs and WL algorithms, we thoroughly compare and analyze the properties of WL-subtree and WLOA kernels. 
We find that the similarities captured by WLOA at different iterations are asymptotically consistent, ensuring that similar graphs remain similar in subsequent iterations, thereby leading to superior performance over the WL-subtree kernel. Inspired by these findings, we conjecture that the consistency in the similarities of graph representations across GNN layers is crucial in capturing relational structures and enhancing graph classification performance. Thus, we propose a loss to enforce the similarity of graph representations to be consistent across different layers. Our empirical analysis verifies our conjecture and shows that our proposed consistency loss can significantly enhance graph classification performance across several GNN backbones on various datasets.",True,True,"Xuyuan Liu and Yinghao Cai and Qihui Yang and Yujun Yan",2024.0,,http://papers.nips.cc/paper\_files/paper/2024/hash/f631e778fd3c1b871e9e3a94369335e9-Abstract-Conference.html,,,"Exploring Consistency in Graph Representations:from Graph Kernels to Graph Neural Networks",[2410.23748] Exploring Consistency in Graph Representations:from ...,https://arxiv.org/abs/2410.23748,Our work aims to bridge the gap between neural network methods and kernel approaches by enabling GNNs to consistently capture relational structures in their "Spectral Insights into Data-Oblivious Critical Layers in Large Language Models",2506.00382v1,DBLP:conf/emnlp/BrownGKTK23,\cite{DBLP:conf/emnlp/BrownGKTK23},"Understanding the Inner Workings of Language Models Through Representation Dissimilarity",http://arxiv.org/abs/2310.14993v1,"As language models are applied to an increasing number of real-world applications, understanding their inner workings has become an important issue in model trust, interpretability, and transparency. In this work we show that representation dissimilarity measures, which are functions that measure the extent to which two model's internal representations differ, can be a valuable tool for gaining insight into the mechanics of language models. Among our insights are: (i) an apparent asymmetry in the internal representations of model using SoLU and GeLU activation functions, (ii) evidence that dissimilarity measures can identify and locate generalization properties of models that are invisible via in-distribution test set performance, and (iii) new evaluations of how language model features vary as width and depth are increased. Our results suggest that dissimilarity measures are a promising set of tools for shedding light on the inner workings of language models.",True,True,"Davis Brown and Charles Godfrey and Nicholas Konz and Jonathan H. Tu and Henry Kvinge",2023.0,,https://doi.org/10.18653/v1/2023.emnlp-main.403,10.18653/V1/2023.EMNLP-MAIN.403,,"Understanding the Inner Workings of Language Models Through Representation Dissimilarity",Understanding the Inner-workings of Language Models ...,https://openreview.net/forum?id=bZel7wM6fN&noteId=6nDMKGYtp0,"In this work we show that representation dissimilarity measures, which are functions that measure the extent to which two model's internal representations" "Spectral Insights into Data-Oblivious Critical Layers in Large Language Models",2506.00382v1,sun2024massive,\cite{sun2024massive},Massive Activations in Large Language Models,http://arxiv.org/abs/2402.17762v2,"We observe an empirical phenomenon in Large Language Models (LLMs) -- very few activations exhibit significantly larger values than others (e.g., 100,000 times larger). We call them massive activations.
First, we demonstrate the widespread existence of massive activations across various LLMs and characterize their locations. Second, we find their values largely stay constant regardless of the input, and they function as indispensable bias terms in LLMs. Third, these massive activations lead to the concentration of attention probabilities to their corresponding tokens, and further, implicit bias terms in the self-attention output. Last, we also study massive activations in Vision Transformers. Code is available at https://github.com/locuslab/massive-activations.",True,True,Mingjie Sun and Xinlei Chen and J Zico Kolter and Zhuang Liu,2024.0,,https://openreview.net/forum?id=F7aAhfitX6,,,Massive Activations in Large Language Models,Massive Activations in Large Language Models,http://arxiv.org/pdf/2402.17762v2,"We observe an empirical phenomenon in Large Language Models (LLMs) -- very few activations exhibit significantly larger values than others (e.g., 100,000 times larger). We call them massive activations. First, we demonstrate the widespread existence of massive activations across various LLMs and characterize their locations. Second, we find their values largely stay constant regardless of the input, and they function as indispensable bias terms in LLMs. Third, these massive activations lead to the concentration of attention probabilities to their corresponding tokens, and further, implicit bias terms in the self-attention output. Last, we also study massive activations in Vision Transformers. Code is available at https://github.com/locuslab/massive-activations." "Spectral Insights into Data-Oblivious Critical Layers in Large Language Models",2506.00382v1,DBLP:conf/emnlp/MartinezLB24,\cite{DBLP:conf/emnlp/MartinezLB24},"Tending Towards Stability: Convergence Challenges in Small Language Models",http://arxiv.org/abs/2410.11451v1,"Increasing the number of parameters in language models is a common strategy to enhance their performance. However, smaller language models remain valuable due to their lower operational costs. Despite their advantages, smaller models frequently underperform compared to their larger counterparts, even when provided with equivalent data and computational resources. Specifically, their performance tends to degrade in the late pretraining phase. This is anecdotally attributed to their reduced representational capacity. Yet, the exact causes of this performance degradation remain unclear. We use the Pythia model suite to analyse the training dynamics that underlie this phenomenon. Across different model sizes, we investigate the convergence of the Attention and MLP activations to their final state and examine how the effective rank of their parameters influences this process. We find that nearly all layers in larger models stabilise early in training - within the first 20% - whereas layers in smaller models exhibit slower and less stable convergence, especially when their parameters have lower effective rank. 
By linking the convergence of layers' activations to their parameters' effective rank, our analyses can guide future work to address inefficiencies in the learning dynamics of small models.",True,True,"Richard Diehl Martinez and Pietro Lesci and Paula Buttery",2024.0,,https://aclanthology.org/2024.findings-emnlp.187,,,"Tending Towards Stability: Convergence Challenges in Small Language Models",Convergence Challenges in Small Language Models - arXiv,https://arxiv.org/abs/2410.11451,Abstract page for arXiv paper 2410.11451: Tending Towards Stability: Convergence Challenges in Small Language Models. "Spectral Insights into Data-Oblivious Critical Layers in Large Language Models",2506.00382v1,DBLP:conf/nips/MengBAB22,\cite{DBLP:conf/nips/MengBAB22},Locating and Editing Factual Associations in GPT,http://arxiv.org/abs/2202.05262v5,"We analyze the storage and recall of factual associations in autoregressive transformer language models, finding evidence that these associations correspond to localized, directly-editable computations. We first develop a causal intervention for identifying neuron activations that are decisive in a model's factual predictions. This reveals a distinct set of steps in middle-layer feed-forward modules that mediate factual predictions while processing subject tokens. To test our hypothesis that these computations correspond to factual association recall, we modify feed-forward weights to update specific factual associations using Rank-One Model Editing (ROME). We find that ROME is effective on a standard zero-shot relation extraction (zsRE) model-editing task, comparable to existing methods. To perform a more sensitive evaluation, we also evaluate ROME on a new dataset of counterfactual assertions, on which it simultaneously maintains both specificity and generalization, whereas other methods sacrifice one or another. Our results confirm an important role for mid-layer feed-forward modules in storing factual associations and suggest that direct manipulation of computational mechanisms may be a feasible approach for model editing. The code, dataset, visualizations, and an interactive demo notebook are available at https://rome.baulab.info/",True,True,"Kevin Meng and David Bau and Alex Andonian and Yonatan Belinkov",2022.0,,http://papers.nips.cc/paper\_files/paper/2022/hash/6f1d43d5a82a37e89b0665b33bf3a182-Abstract-Conference.html,,,Locating and Editing Factual Associations in GPT,Locating and Editing Factual Associations in GPT,http://arxiv.org/pdf/2202.05262v5,"We analyze the storage and recall of factual associations in autoregressive transformer language models, finding evidence that these associations correspond to localized, directly-editable computations. We first develop a causal intervention for identifying neuron activations that are decisive in a model's factual predictions. This reveals a distinct set of steps in middle-layer feed-forward modules that mediate factual predictions while processing subject tokens. To test our hypothesis that these computations correspond to factual association recall, we modify feed-forward weights to update specific factual associations using Rank-One Model Editing (ROME). We find that ROME is effective on a standard zero-shot relation extraction (zsRE) model-editing task, comparable to existing methods. 
To perform a more sensitive evaluation, we also evaluate ROME on a new dataset of counterfactual assertions, on which it simultaneously maintains both specificity and generalization, whereas other methods sacrifice one or another. Our results confirm an important role for mid-layer feed-forward modules in storing factual associations and suggest that direct manipulation of computational mechanisms may be a feasible approach for model editing. The code, dataset, visualizations, and an interactive demo notebook are available at https://rome.baulab.info/" "Spectral Insights into Data-Oblivious Critical Layers in Large Language Models",2506.00382v1,DBLP:conf/emnlp/AzariaM23,\cite{DBLP:conf/emnlp/AzariaM23},The Internal State of an LLM Knows When It's Lying,http://arxiv.org/abs/2304.13734v2,"While Large Language Models (LLMs) have shown exceptional performance in various tasks, one of their most prominent drawbacks is generating inaccurate or false information with a confident tone. In this paper, we provide evidence that the LLM's internal state can be used to reveal the truthfulness of statements. This includes both statements provided to the LLM, and statements that the LLM itself generates. Our approach is to train a classifier that outputs the probability that a statement is truthful, based on the hidden layer activations of the LLM as it reads or generates the statement. Experiments demonstrate that given a set of test sentences, of which half are true and half false, our trained classifier achieves an average of 71\% to 83\% accuracy labeling which sentences are true versus false, depending on the LLM base model. Furthermore, we explore the relationship between our classifier's performance and approaches based on the probability assigned to the sentence by the LLM. We show that while LLM-assigned sentence probability is related to sentence truthfulness, this probability is also dependent on sentence length and the frequencies of words in the sentence, resulting in our trained classifier providing a more reliable approach to detecting truthfulness, highlighting its potential to enhance the reliability of LLM-generated content and its practical applicability in real-world scenarios.",True,True,"Amos Azaria and Tom M. Mitchell",2023.0,,https://doi.org/10.18653/v1/2023.findings-emnlp.68,10.18653/V1/2023.FINDINGS-EMNLP.68,,The Internal State of an LLM Knows When It's Lying,The Internal State of an LLM Knows When It's Lying,http://arxiv.org/pdf/2304.13734v2,"While Large Language Models (LLMs) have shown exceptional performance in various tasks, one of their most prominent drawbacks is generating inaccurate or false information with a confident tone. In this paper, we provide evidence that the LLM's internal state can be used to reveal the truthfulness of statements. This includes both statements provided to the LLM, and statements that the LLM itself generates. Our approach is to train a classifier that outputs the probability that a statement is truthful, based on the hidden layer activations of the LLM as it reads or generates the statement. Experiments demonstrate that given a set of test sentences, of which half are true and half false, our trained classifier achieves an average of 71\% to 83\% accuracy labeling which sentences are true versus false, depending on the LLM base model. Furthermore, we explore the relationship between our classifier's performance and approaches based on the probability assigned to the sentence by the LLM. 
We show that while LLM-assigned sentence probability is related to sentence truthfulness, this probability is also dependent on sentence length and the frequencies of words in the sentence, resulting in our trained classifier providing a more reliable approach to detecting truthfulness, highlighting its potential to enhance the reliability of LLM-generated content and its practical applicability in real-world scenarios." "Spectral Insights into Data-Oblivious Critical Layers in Large Language Models",2506.00382v1,DBLP:conf/emnlp/ChenTGW00YY24,\cite{DBLP:conf/emnlp/ChenTGW00YY24},Llama SLayer 8B: Shallow Layers Hold the Key to Knowledge Injection,http://arxiv.org/abs/2410.02330v1,"As a manner to augment pre-trained large language models (LLM), knowledge injection is critical to develop vertical domain large models and has been widely studied. Although most current approaches, including parameter-efficient fine-tuning (PEFT) and block expansion methods, uniformly apply knowledge across all LLM layers, it raises the question: are all layers equally crucial for knowledge injection? We begin by evaluating the importance of each layer in finding the optimal layer range for knowledge injection. Intuitively, the more important layers should play a more critical role in knowledge injection and deserve a denser injection. We observe performance dips in question-answering benchmarks after the removal or expansion of the shallow layers, and the degradation shrinks as the layer gets deeper, indicating that the shallow layers hold the key to knowledge injection. This insight leads us to propose the S strategy, a post-pretraining strategy of selectively enhancing shallow layers while pruning the less effective deep ones. Based on this strategy, we introduce Llama Slayer-8B and Llama Slayer-8B-Instruct. We experimented on the corpus of code $\&$ math and demonstrated the effectiveness of our strategy. Further experiments across different LLM, Mistral-7B, and a legal corpus confirmed the general applicability of the approach, underscoring its wide-ranging efficacy. Our code is available at: https://github.com/txchen-USTC/Llama-Slayer",True,True,"Tianxiang Chen and Zhentao Tan and Tao Gong and Yue Wu and Qi Chu and Bin Liu and Jieping Ye and Nenghai Yu",2024.0,,https://aclanthology.org/2024.findings-emnlp.347,,,Llama SLayer 8B: Shallow Layers Hold the Key to Knowledge Injection,Llama SLayer 8B: Shallow Layers Hold the Key to Knowledge Injection,http://arxiv.org/pdf/2410.02330v1,"As a manner to augment pre-trained large language models (LLM), knowledge injection is critical to develop vertical domain large models and has been widely studied. Although most current approaches, including parameter-efficient fine-tuning (PEFT) and block expansion methods, uniformly apply knowledge across all LLM layers, it raises the question: are all layers equally crucial for knowledge injection? We begin by evaluating the importance of each layer in finding the optimal layer range for knowledge injection. Intuitively, the more important layers should play a more critical role in knowledge injection and deserve a denser injection. We observe performance dips in question-answering benchmarks after the removal or expansion of the shallow layers, and the degradation shrinks as the layer gets deeper, indicating that the shallow layers hold the key to knowledge injection. This insight leads us to propose the S strategy, a post-pretraining strategy of selectively enhancing shallow layers while pruning the less effective deep ones.
Based on this strategy, we introduce Llama Slayer-8B and Llama Slayer-8B-Instruct. We experimented on the corpus of code $\&$ math and demonstrated the effectiveness of our strategy. Further experiments across different LLM, Mistral-7B, and a legal corpus confirmed the general applicability of the approach, underscoring its wide-ranging efficacy. Our code is available at: https://github.com/txchen-USTC/Llama-Slayer" "Spectral Insights into Data-Oblivious Critical Layers in Large Language Models",2506.00382v1,DBLP:conf/emnlp/ZhaoLLZ024,\cite{DBLP:conf/emnlp/ZhaoLLZ024},"Defending Large Language Models Against Jailbreak Attacks via Layer-specific Editing",,,True,False,"Wei Zhao and Zhe Li and Yige Li and Ye Zhang and Jun Sun",2024.0,,https://aclanthology.org/2024.findings-emnlp.293,,,"Defending Large Language Models Against Jailbreak Attacks via Layer-specific Editing",Defending Large Language Models Against Jailbreak Attacks via ...,https://aclanthology.org/2024.findings-emnlp.293/,"In this work, we investigate how LLMs respond to harmful prompts and propose a novel defense method termed Layer-specific Editing (LED) to enhance the" "Spectral Insights into Data-Oblivious Critical Layers in Large Language Models",2506.00382v1,jin-etal-2025-exploring,\cite{jin-etal-2025-exploring},"Exploring Concept Depth: How Large Language Models Acquire Knowledge and Concept at Different Layers?",http://arxiv.org/abs/2404.07066v7,"Large language models (LLMs) have shown remarkable performances across a wide range of tasks. However, the mechanisms by which these models encode tasks of varying complexities remain poorly understood. In this paper, we explore the hypothesis that LLMs process concepts of varying complexities in different layers, introducing the idea of ""Concept Depth"" to suggest that more complex concepts are typically acquired in deeper layers. Specifically, we categorize concepts based on their level of abstraction, defining them in the order of increasing complexity within factual, emotional, and inferential tasks. We conduct extensive probing experiments using layer-wise representations across various LLM families (Gemma, LLaMA, Qwen) on various datasets spanning the three domains of tasks. Our findings reveal that models could efficiently conduct probing for simpler tasks in shallow layers, and more complex tasks typically necessitate deeper layers for accurate understanding. Additionally, we examine how external factors, such as adding noise to the input and quantizing the model weights, might affect layer-wise representations. Our findings suggest that these factors can impede the development of a conceptual understanding of LLMs until deeper layers are explored. We hope that our proposed concept and experimental insights will enhance the understanding of the mechanisms underlying LLMs.
Our codes are available at https://github.com/Luckfort/CD.",True,True,"Jin, Mingyu and Yu, Qinkai and Huang, Jingyuan and Zeng, Qingcheng and Wang, Zhenting and Hua, Wenyue and Zhao, Haiyan and Mei, Kai and Meng, Yanda and Ding, Kaize and Yang, Fan and Du, Mengnan and Zhang, Yongfeng",2025.0,,https://aclanthology.org/2025.coling-main.37/,,,"Exploring Concept Depth: How Large Language Models Acquire Knowledge and Concept at Different Layers?",Exploring Concept Depth: How Large Language Models ...,https://aclanthology.org/2025.coling-main.37.pdf,"by M Jin · 2025 · Cited by 30 — In this paper, we design a probing framework to understand how concepts at various levels are encoded within LLMs and investigate whether the" "Spectral Insights into Data-Oblivious Critical Layers in Large Language Models",2506.00382v1,DBLP:journals/corr/abs-2412-09563,\cite{DBLP:journals/corr/abs-2412-09563},"Does Representation Matter? Exploring Intermediate Layers in Large Language Models",http://arxiv.org/abs/2412.09563v1,"Understanding what defines a good representation in large language models (LLMs) is fundamental to both theoretical understanding and practical applications. In this paper, we investigate the quality of intermediate representations in various LLM architectures, including Transformers and State Space Models (SSMs). We find that intermediate layers often yield more informative representations for downstream tasks than the final layers. To measure the representation quality, we adapt and apply a suite of metrics - such as prompt entropy, curvature, and augmentation-invariance - originally proposed in other contexts. Our empirical study reveals significant architectural differences, how representations evolve throughout training, and how factors like input randomness and prompt length affect each layer. Notably, we observe a bimodal pattern in the entropy of some intermediate layers and consider potential explanations tied to training data. Overall, our results illuminate the internal mechanics of LLMs and guide strategies for architectural optimization and training.",True,True,"Oscar Skean and Md Rifat Arefin and Yann LeCun and Ravid Shwartz{-}Ziv",2024.0,,https://doi.org/10.48550/arXiv.2412.09563,10.48550/ARXIV.2412.09563,CoRR,"Does Representation Matter? Exploring Intermediate Layers in Large Language Models",Does Representation Matter? Exploring Intermediate ...,https://openreview.net/forum?id=FN0tZ9pVLz&referrer=%5Bthe%20profile%20of%20Ravid%20Shwartz-Ziv%5D(%2Fprofile%3Fid%3D~Ravid_Shwartz-Ziv2),We find that intermediate layers consistently provide better representations for downstream tasks compared to final layers. "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,rusu2016progressive,\cite{rusu2016progressive},Progressive Neural Networks,http://arxiv.org/abs/1606.04671v4,"Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning.
Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.",True,True,"Rusu, Andrei A and Rabinowitz, Neil C and Desjardins, Guillaume and Soyer, Hubert and Kirkpatrick, James and Kavukcuoglu, Koray and Pascanu, Razvan and Hadsell, Raia",2016.0,,,,arXiv preprint arXiv:1606.04671,Progressive Neural Networks,Progressive Neural Networks,http://arxiv.org/pdf/1606.04671v4,"Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy." "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,rypescdivide,\cite{rypescdivide},"Divide and not forget: Ensemble of selectively trained experts in Continual Learning",http://arxiv.org/abs/2401.10191v3,"Class-incremental learning is becoming more popular as it helps models widen their applicability while not forgetting what they already know. A trend in this area is to use a mixture-of-expert technique, where different models work together to solve the task. However, the experts are usually trained all at once using whole task data, which makes them all prone to forgetting and increasing computational burden. To address this limitation, we introduce a novel approach named SEED. SEED selects only one, the most optimal expert for a considered task, and uses data from this task to fine-tune only this expert. For this purpose, each expert represents each class with a Gaussian distribution, and the optimal expert is selected based on the similarity of those distributions. Consequently, SEED increases diversity and heterogeneity within the experts while maintaining the high stability of this ensemble method. The extensive experiments demonstrate that SEED achieves state-of-the-art performance in exemplar-free settings across various scenarios, showing the potential of expert diversification through data in continual learning.",True,True,"Rype{\'s}{\'c}, Grzegorz and Cygert, Sebastian and Khan, Valeriya and Trzcinski, Tomasz and Zieli{\'n}ski, Bartosz Micha{\l} and Twardowski, Bart{\l}omiej",2024.0,,,,,"Divide and not forget: Ensemble of selectively trained experts in Continual Learning",Ensemble of selectively trained experts in Continual Learning - arXiv,https://arxiv.org/abs/2401.10191,"Divide and not forget: Ensemble of selectively trained experts in Continual Learning. Authors:Grzegorz Rypeść, Sebastian Cygert, Valeriya Khan," "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,kirkpatrick2017overcoming,\cite{kirkpatrick2017overcoming},Overcoming catastrophic forgetting in neural networks,http://arxiv.org/abs/1612.00796v2,"The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. 
Neural networks are not, in general, capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks which they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on the MNIST hand written digit dataset and by learning several Atari 2600 games sequentially.",True,True,"Kirkpatrick, James and Pascanu, Razvan and Rabinowitz, Neil and Veness, Joel and Desjardins, Guillaume and Rusu, Andrei A and Milan, Kieran and Quan, John and Ramalho, Tiago and Grabska-Barwinska, Agnieszka and others",2017.0,,,,Proceedings of the national academy of sciences,Overcoming catastrophic forgetting in neural networks,Overcoming catastrophic forgetting in neural networks,http://arxiv.org/pdf/1612.00796v2,"The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Neural networks are not, in general, capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks which they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on the MNIST hand written digit dataset and by learning several Atari 2600 games sequentially." "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,magistrielastic,\cite{magistrielastic},"Elastic Feature Consolidation for Cold Start Exemplar-Free Incremental Learning",http://arxiv.org/abs/2402.03917v3,"Exemplar-Free Class Incremental Learning (EFCIL) aims to learn from a sequence of tasks without having access to previous task data. In this paper, we consider the challenging Cold Start scenario in which insufficient data is available in the first task to learn a high-quality backbone. This is especially challenging for EFCIL since it requires high plasticity, which results in feature drift which is difficult to compensate for in the exemplar-free setting. To address this problem, we propose a simple and effective approach that consolidates feature representations by regularizing drift in directions highly relevant to previous tasks and employs prototypes to reduce task-recency bias. Our method, called Elastic Feature Consolidation (EFC), exploits a tractable second-order approximation of feature drift based on an Empirical Feature Matrix (EFM). The EFM induces a pseudo-metric in feature space which we use to regularize feature drift in important directions and to update Gaussian prototypes used in a novel asymmetric cross entropy loss which effectively balances prototype rehearsal with data from new tasks. 
Experimental results on CIFAR-100, Tiny-ImageNet, ImageNet-Subset and ImageNet-1K demonstrate that Elastic Feature Consolidation is better able to learn new tasks by maintaining model plasticity and significantly outperform the state-of-the-art.",True,True,"Magistri, Simone and Trinci, Tomaso and Soutif, Albin and van de Weijer, Joost and Bagdanov, Andrew D",2024.0,,,,,"Elastic Feature Consolidation for Cold Start Exemplar-Free Incremental Learning",[2402.03917] Elastic Feature Consolidation for Cold Start Exemplar ...,https://arxiv.org/abs/2402.03917,Exemplar-Free Class Incremental Learning (EFCIL) aims to learn from a sequence of tasks without having access to previous task data. In this "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,saha2021gradient,\cite{saha2021gradient},Gradient Projection Memory for Continual Learning,http://arxiv.org/abs/2103.09762v1,"The ability to learn continually without forgetting the past tasks is a desired attribute for artificial learning systems. Existing approaches to enable such learning in artificial neural networks usually rely on network growth, importance based weight update or replay of old data from the memory. In contrast, we propose a novel approach where a neural network learns new tasks by taking gradient steps in the orthogonal direction to the gradient subspaces deemed important for the past tasks. We find the bases of these subspaces by analyzing network representations (activations) after learning each task with Singular Value Decomposition (SVD) in a single shot manner and store them in the memory as Gradient Projection Memory (GPM). With qualitative and quantitative analyses, we show that such orthogonal gradient descent induces minimum to no interference with the past tasks, thereby mitigates forgetting. We evaluate our algorithm on diverse image classification datasets with short and long sequences of tasks and report better or on-par performance compared to the state-of-the-art approaches.",True,True,"Saha, Gobinda and Garg, Isha and Roy, Kaushik",2021.0,,,,,Gradient Projection Memory for Continual Learning,Gradient Projection Memory for Continual Learning,http://arxiv.org/pdf/2103.09762v1,"The ability to learn continually without forgetting the past tasks is a desired attribute for artificial learning systems. Existing approaches to enable such learning in artificial neural networks usually rely on network growth, importance based weight update or replay of old data from the memory. In contrast, we propose a novel approach where a neural network learns new tasks by taking gradient steps in the orthogonal direction to the gradient subspaces deemed important for the past tasks. We find the bases of these subspaces by analyzing network representations (activations) after learning each task with Singular Value Decomposition (SVD) in a single shot manner and store them in the memory as Gradient Projection Memory (GPM). With qualitative and quantitative analyses, we show that such orthogonal gradient descent induces minimum to no interference with the past tasks, thereby mitigates forgetting. We evaluate our algorithm on diverse image classification datasets with short and long sequences of tasks and report better or on-par performance compared to the state-of-the-art approaches." 
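The GPM recipe in the entry above reduces to two linear-algebra steps: after each task, extract an orthonormal basis of the important activation subspace with an SVD, then project later gradients onto its orthogonal complement. A minimal sketch (the energy threshold, shapes, and variable names are illustrative assumptions, not the authors' released code):

```python
import torch

def gpm_basis(acts: torch.Tensor, energy: float = 0.99) -> torch.Tensor:
    """Orthonormal basis spanning `energy` of the activation spectrum.
    acts: (dim, n_samples) layer activations collected after training a task."""
    U, S, _ = torch.linalg.svd(acts, full_matrices=False)
    ratios = torch.cumsum(S**2, dim=0) / torch.sum(S**2)
    r = int(torch.searchsorted(ratios, energy)) + 1   # keep top-r directions
    return U[:, :r]                                    # (dim, r)

def project_out(grad: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    """Remove the gradient component lying in the protected subspace."""
    return grad - basis @ (basis.T @ grad)

M = gpm_basis(torch.randn(64, 512))   # basis stored in memory after task 1
g = torch.randn(64)                   # raw gradient while learning task 2
g_step = project_out(g, M)            # update orthogonal to the old-task subspace
```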
"Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,lin2022trgp,\cite{lin2022trgp},TRGP: Trust Region Gradient Projection for Continual Learning,http://arxiv.org/abs/2202.02931v1,"Catastrophic forgetting is one of the major challenges in continual learning. To address this issue, some existing methods put restrictive constraints on the optimization space of the new task for minimizing the interference to old tasks. However, this may lead to unsatisfactory performance for the new task, especially when the new task is strongly correlated with old tasks. To tackle this challenge, we propose Trust Region Gradient Projection (TRGP) for continual learning to facilitate the forward knowledge transfer based on an efficient characterization of task correlation. Particularly, we introduce a notion of `trust region' to select the most related old tasks for the new task in a layer-wise and single-shot manner, using the norm of gradient projection onto the subspace spanned by task inputs. Then, a scaled weight projection is proposed to cleverly reuse the frozen weights of the selected old tasks in the trust region through a layer-wise scaling matrix. By jointly optimizing the scaling matrices and the model, where the model is updated along the directions orthogonal to the subspaces of old tasks, TRGP can effectively prompt knowledge transfer without forgetting. Extensive experiments show that our approach achieves significant improvement over related state-of-the-art methods.",True,True,"Lin, Sen and Yang, Li and Fan, Deliang and Zhang, Junshan",2022.0,,,,International Conference on Learning Representations(ICLR),TRGP: Trust Region Gradient Projection for Continual Learning,TRGP: Trust Region Gradient Projection for Continual Learning,http://arxiv.org/pdf/2202.02931v1,"Catastrophic forgetting is one of the major challenges in continual learning. To address this issue, some existing methods put restrictive constraints on the optimization space of the new task for minimizing the interference to old tasks. However, this may lead to unsatisfactory performance for the new task, especially when the new task is strongly correlated with old tasks. To tackle this challenge, we propose Trust Region Gradient Projection (TRGP) for continual learning to facilitate the forward knowledge transfer based on an efficient characterization of task correlation. Particularly, we introduce a notion of `trust region' to select the most related old tasks for the new task in a layer-wise and single-shot manner, using the norm of gradient projection onto the subspace spanned by task inputs. Then, a scaled weight projection is proposed to cleverly reuse the frozen weights of the selected old tasks in the trust region through a layer-wise scaling matrix. By jointly optimizing the scaling matrices and the model, where the model is updated along the directions orthogonal to the subspaces of old tasks, TRGP can effectively prompt knowledge transfer without forgetting. Extensive experiments show that our approach achieves significant improvement over related state-of-the-art methods." 
"Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,cbrs2020,\cite{cbrs2020},Online continual learning from imbalanced data,,,True,False,"Chrysakis, Aristotelis and Moens, Marie-Francine",2020.0,,,,,Online continual learning from imbalanced data,Online Continual Learning from Imbalanced Data,https://proceedings.mlr.press/v119/chrysakis20a.html,"We aim to evaluate memory population methods that are used in online continual learning, when dealing with highly imbalanced and temporally correlated streams" "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,infors2022,\cite{infors2022},Information-theoretic Online Memory Selection for Continual Learning,http://arxiv.org/abs/2204.04763v1,"A challenging problem in task-free continual learning is the online selection of a representative replay memory from data streams. In this work, we investigate the online memory selection problem from an information-theoretic perspective. To gather the most information, we propose the \textit{surprise} and the \textit{learnability} criteria to pick informative points and to avoid outliers. We present a Bayesian model to compute the criteria efficiently by exploiting rank-one matrix structures. We demonstrate that these criteria encourage selecting informative points in a greedy algorithm for online memory selection. Furthermore, by identifying the importance of \textit{the timing to update the memory}, we introduce a stochastic information-theoretic reservoir sampler (InfoRS), which conducts sampling among selective points with high information. Compared to reservoir sampling, InfoRS demonstrates improved robustness against data imbalance. Finally, empirical performances over continual learning benchmarks manifest its efficiency and efficacy.",True,True,"Sun, Shengyang and Calandriello, Daniele and Hu, Huiyi and Li, Ang and Titsias, Michalis",2022.0,,,,,Information-theoretic Online Memory Selection for Continual Learning,Information-theoretic Online Memory Selection for Continual Learning,https://openreview.net/forum?id=IpctgL7khPp,We present information-theoretic algorithms to tackle the online memory selection problem in task-free and data imbalanced continual learning. "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,shin2017continual,\cite{shin2017continual},Continual Learning with Deep Generative Replay,http://arxiv.org/abs/1705.08690v3,"Attempts to train a comprehensive artificial intelligence capable of solving multiple tasks have been impeded by a chronic problem called catastrophic forgetting. Although simply replaying all previous data alleviates the problem, it requires large memory and even worse, often infeasible in real world applications where the access to past data is limited. Inspired by the generative nature of hippocampus as a short-term memory system in primate brain, we propose the Deep Generative Replay, a novel framework with a cooperative dual model architecture consisting of a deep generative model (""generator"") and a task solving model (""solver""). With only these two models, training data for previous tasks can easily be sampled and interleaved with those for a new task. 
We test our methods in several sequential learning settings involving image classification tasks.",True,True,"Shin, Hanul and Lee, Jung Kwon and Kim, Jaehong and Kim, Jiwon",2017.0,,,,Advances in Neural Information Processing Systems(Neurips),Continual Learning with Deep Generative Replay,Continual Learning with Deep Generative Replay,http://arxiv.org/pdf/1705.08690v3,"Attempts to train a comprehensive artificial intelligence capable of solving multiple tasks have been impeded by a chronic problem called catastrophic forgetting. Although simply replaying all previous data alleviates the problem, it requires large memory and even worse, often infeasible in real world applications where the access to past data is limited. Inspired by the generative nature of hippocampus as a short-term memory system in primate brain, we propose the Deep Generative Replay, a novel framework with a cooperative dual model architecture consisting of a deep generative model (""generator"") and a task solving model (""solver""). With only these two models, training data for previous tasks can easily be sampled and interleaved with those for a new task. We test our methods in several sequential learning settings involving image classification tasks." "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,chaudhry2018efficient,\cite{chaudhry2018efficient},Efficient Lifelong Learning with A-GEM,,,True,False,"Chaudhry, Arslan and Ranzato, Marc’Aurelio and Rohrbach, Marcus and Elhoseiny, Mohamed",2018.0,,,,,Efficient Lifelong Learning with A-GEM,Efficient Lifelong Learning with A-GEM,http://arxiv.org/pdf/1812.00420v2,"In lifelong learning, the learner is presented with a sequence of tasks, incrementally building a data-driven prior which may be leveraged to speed up learning of a new task. In this work, we investigate the efficiency of current lifelong approaches, in terms of sample complexity, computational and memory cost. Towards this end, we first introduce a new and a more realistic evaluation protocol, whereby learners observe each example only once and hyper-parameter selection is done on a small and disjoint set of tasks, which is not used for the actual learning experience and evaluation. Second, we introduce a new metric measuring how quickly a learner acquires a new skill. Third, we propose an improved version of GEM (Lopez-Paz & Ranzato, 2017), dubbed Averaged GEM (A-GEM), which enjoys the same or even better performance as GEM, while being almost as computationally and memory efficient as EWC (Kirkpatrick et al., 2016) and other regularization-based methods. Finally, we show that all algorithms including A-GEM can learn even more quickly if they are provided with task descriptors specifying the classification tasks under consideration. Our experiments on several standard lifelong learning benchmarks demonstrate that A-GEM has the best trade-off between accuracy and efficiency." "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,chaudhry2019continual,\cite{chaudhry2019continual},Continual learning with tiny episodic memories,,,True,False,"Dokania, P and Torr, P and Ranzato, M",2019.0,,,,,Continual learning with tiny episodic memories,[PDF] Continual Learning with Tiny Episodic Memories,https://tajanthan.github.io/il/docs/cler.pdf,"In this work, we empirically analyze the effectiveness of a very small episodic memory in a CL setup where each training example is only seen once."
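The A-GEM and tiny-episodic-memory rows above boil down to two reusable ingredients: a reservoir-sampled buffer that keeps a uniform random sample of the stream in constant memory, and a gradient correction that blocks updates which would increase the replayed loss. A minimal sketch, with the buffer capacity and the toy vectors as assumptions.

```python
# Reservoir sampling for a tiny episodic memory, plus the A-GEM projection:
# if the new-task gradient g conflicts with the memory gradient g_ref,
# remove the conflicting component.
import numpy as np

rng = np.random.default_rng(0)

def reservoir_update(memory, item, n_seen, capacity=50):
    """Keep a uniform random sample of the first n_seen+1 stream items."""
    if len(memory) < capacity:
        memory.append(item)
    else:
        j = rng.integers(0, n_seen + 1)   # uniform over all items so far
        if j < capacity:
            memory[j] = item

def agem_project(g, g_ref):
    """A-GEM: project g so it cannot increase the average replay loss."""
    dot = float(g @ g_ref)
    if dot < 0.0:                         # g points against the memory
        g = g - (dot / float(g_ref @ g_ref)) * g_ref
    return g

# After projection, the update never opposes the memory gradient.
g, g_ref = rng.normal(size=8), rng.normal(size=8)
assert agem_project(g, g_ref) @ g_ref >= -1e-12
```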
"Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,rebuffi2017icarl,\cite{rebuffi2017icarl},iCaRL: Incremental Classifier and Representation Learning,http://arxiv.org/abs/1611.07725v2,"A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data. In this work, we introduce a new training strategy, iCaRL, that allows learning in such a class-incremental way: only the training data for a small number of classes has to be present at the same time and new classes can be added progressively. iCaRL learns strong classifiers and a data representation simultaneously. This distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures. We show by experiments on CIFAR-100 and ImageNet ILSVRC 2012 data that iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail.",True,True,"Rebuffi, Sylvestre-Alvise and Kolesnikov, Alexander and Sperl, Georg and Lampert, Christoph H",2017.0,,,,,iCaRL: Incremental Classifier and Representation Learning,iCaRL: Incremental Classifier and Representation Learning,http://arxiv.org/pdf/1611.07725v2,"A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data. In this work, we introduce a new training strategy, iCaRL, that allows learning in such a class-incremental way: only the training data for a small number of classes has to be present at the same time and new classes can be added progressively. iCaRL learns strong classifiers and a data representation simultaneously. This distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures. We show by experiments on CIFAR-100 and ImageNet ILSVRC 2012 data that iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail." "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,gargtic2024,\cite{gargtic2024},TiC-CLIP: Continual Training of CLIP Models,,,True,False,"Garg, Saurabh and Farajtabar, Mehrdad and Pouransari, Hadi and Vemulapalli, Raviteja and Mehta, Sachin and Tuzel, Oncel and Shankar, Vaishaal and Faghri, Fartash",2024.0,,,,,TiC-CLIP: Continual Training of CLIP Models,TiC-CLIP: Continual Training of CLIP Models,http://arxiv.org/pdf/2310.16226v3,"Keeping large foundation models up to date on latest data is inherently expensive. To avoid the prohibitive costs of constantly retraining, it is imperative to continually train these models. This problem is exacerbated by the lack of any large scale continual learning benchmarks or baselines. We introduce the first set of web-scale Time-Continual (TiC) benchmarks for training vision-language models: TiC-DataComp, TiC-YFCC, and TiC-Redcaps. TiC-DataComp, our largest dataset, contains over 12.7B timestamped image-text pairs spanning 9 years (2014-2022). We first use our benchmarks to curate various dynamic evaluations to measure temporal robustness of existing models. 
We show OpenAI's CLIP (trained on data up to 2020) loses $\approx 8\%$ zero-shot accuracy on our curated retrieval task from 2021-2022 compared with more recently trained models in OpenCLIP repository. We then study how to efficiently train models on time-continuous data. We demonstrate that a simple rehearsal-based approach that continues training from the last checkpoint and replays old data reduces compute by $2.5\times$ when compared to the standard practice of retraining from scratch. Code is available at https://github.com/apple/ml-tic-clip." "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,lopez2017gradient,\cite{lopez2017gradient},Gradient Episodic Memory for Continual Learning,http://arxiv.org/abs/1706.08840v6,"One major obstacle towards AI is the poor ability of models to solve new problems quicker, and without forgetting previously acquired knowledge. To better understand this issue, we study the problem of continual learning, where the model observes, once and one by one, examples concerning a sequence of tasks. First, we propose a set of metrics to evaluate models learning over a continuum of data. These metrics characterize models not only by their test accuracy, but also in terms of their ability to transfer knowledge across tasks. Second, we propose a model for continual learning, called Gradient Episodic Memory (GEM) that alleviates forgetting, while allowing beneficial transfer of knowledge to previous tasks. Our experiments on variants of the MNIST and CIFAR-100 datasets demonstrate the strong performance of GEM when compared to the state-of-the-art.",True,True,"Lopez-Paz, David and Ranzato, Marc'Aurelio",2017.0,,,,Advances in Neural Information Processing Systems(Neurips),Gradient Episodic Memory for Continual Learning,Gradient Episodic Memory for Continual Learning,http://arxiv.org/pdf/1706.08840v6,"One major obstacle towards AI is the poor ability of models to solve new problems quicker, and without forgetting previously acquired knowledge. To better understand this issue, we study the problem of continual learning, where the model observes, once and one by one, examples concerning a sequence of tasks. First, we propose a set of metrics to evaluate models learning over a continuum of data. These metrics characterize models not only by their test accuracy, but also in terms of their ability to transfer knowledge across tasks. Second, we propose a model for continual learning, called Gradient Episodic Memory (GEM) that alleviates forgetting, while allowing beneficial transfer of knowledge to previous tasks. Our experiments on variants of the MNIST and CIFAR-100 datasets demonstrate the strong performance of GEM when compared to the state-of-the-art." "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,bennani2020generalisation,\cite{bennani2020generalisation},"Generalisation Guarantees for Continual Learning with Orthogonal Gradient Descent",http://arxiv.org/abs/2006.11942v4,"In Continual Learning settings, deep neural networks are prone to Catastrophic Forgetting. Orthogonal Gradient Descent was proposed to tackle the challenge. However, no theoretical guarantees have been proven yet. We present a theoretical framework to study Continual Learning algorithms in the Neural Tangent Kernel regime. This framework comprises closed form expression of the model through tasks and proxies for Transfer Learning, generalisation and tasks similarity. 
In this framework, we prove that OGD is robust to Catastrophic Forgetting then derive the first generalisation bound for SGD and OGD for Continual Learning. Finally, we study the limits of this framework in practice for OGD and highlight the importance of the Neural Tangent Kernel variation for Continual Learning with OGD.",True,True,"Bennani, Mehdi Abbana and Sugiyama, Masashi",2020.0,,,,,"Generalisation Guarantees for Continual Learning with Orthogonal Gradient Descent",Generalisation Guarantees for Continual Learning with Orthogonal ...,https://ui.adsabs.harvard.edu/abs/2020arXiv200611942A/abstract,"In Continual Learning settings, deep neural networks are prone to Catastrophic Forgetting. Orthogonal Gradient Descent was proposed to tackle the challenge." "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,doan2021theoretical,\cite{doan2021theoretical},"A Theoretical Analysis of Catastrophic Forgetting through the NTK Overlap Matrix",http://arxiv.org/abs/2010.04003v2,"Continual learning (CL) is a setting in which an agent has to learn from an incoming stream of data during its entire lifetime. Although major advances have been made in the field, one recurring problem which remains unsolved is that of Catastrophic Forgetting (CF). While the issue has been extensively studied empirically, little attention has been paid from a theoretical angle. In this paper, we show that the impact of CF increases as two tasks increasingly align. We introduce a measure of task similarity called the NTK overlap matrix which is at the core of CF. We analyze common projected gradient algorithms and demonstrate how they mitigate forgetting. Then, we propose a variant of Orthogonal Gradient Descent (OGD) which leverages structure of the data through Principal Component Analysis (PCA). Experiments support our theoretical findings and show how our method can help reduce CF on classical CL datasets.",True,True,"Doan, Thang and Bennani, Mehdi Abbana and Mazoure, Bogdan and Rabusseau, Guillaume and Alquier, Pierre",2021.0,,,,,"A Theoretical Analysis of Catastrophic Forgetting through the NTK Overlap Matrix",A Theoretical Analysis of Catastrophic Forgetting through the NTK ...,https://arxiv.org/abs/2010.04003,"In this paper, we show that the impact of CF increases as two tasks increasingly align. We introduce a measure of task similarity called the NTK overlap matrix." "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,yin2020optimization,\cite{yin2020optimization},"Optimization and Generalization of Regularization-Based Continual Learning: a Loss Approximation Viewpoint",http://arxiv.org/abs/2006.10974v3,"Neural networks have achieved remarkable success in many cognitive tasks. However, when they are trained sequentially on multiple tasks without access to old data, their performance on early tasks tend to drop significantly. This problem is often referred to as catastrophic forgetting, a key challenge in continual learning of neural networks. The regularization-based approach is one of the primary classes of methods to alleviate catastrophic forgetting. In this paper, we provide a novel viewpoint of regularization-based continual learning by formulating it as a second-order Taylor approximation of the loss function of each task. This viewpoint leads to a unified framework that can be instantiated to derive many existing algorithms such as Elastic Weight Consolidation and Kronecker factored Laplace approximation. 
Based on this viewpoint, we study the optimization aspects (i.e., convergence) as well as generalization properties (i.e., finite-sample guarantees) of regularization-based continual learning. Our theoretical results indicate the importance of accurate approximation of the Hessian matrix. The experimental results on several benchmarks provide empirical validation of our theoretical findings.",True,True,"Yin, Dong and Farajtabar, Mehrdad and Li, Ang and Levine, Nir and Mott, Alex",2020.0,,,,arXiv preprint arXiv:2006.10974,"Optimization and Generalization of Regularization-Based Continual Learning: a Loss Approximation Viewpoint",‪Dong Yin‬ - ‪Google Scholar‬,https://scholar.google.com/citations?user=YtM8P88AAAAJ&hl=en,"Optimization and Generalization of Regularization-Based Continual Learning: a Loss Approximation Viewpoint. D Yin, M Farajtabar, A Li, N Levine, A Mott. arXiv" "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,cao2022provable,\cite{cao2022provable},Provable Lifelong Learning of Representations,http://arxiv.org/abs/2110.14098v2,"In lifelong learning, tasks (or classes) to be learned arrive sequentially over time in arbitrary order. During training, knowledge from previous tasks can be captured and transferred to subsequent ones to improve sample efficiency. We consider the setting where all target tasks can be represented in the span of a small number of unknown linear or nonlinear features of the input data. We propose a lifelong learning algorithm that maintains and refines the internal feature representation. We prove that for any desired accuracy on all tasks, the dimension of the representation remains close to that of the underlying representation. The resulting sample complexity improves significantly on existing bounds. In the setting of linear features, our algorithm is provably efficient and the sample complexity for input dimension $d$, $m$ tasks with $k$ features up to error $\epsilon$ is $\tilde{O}(dk^{1.5}/\epsilon+km/\epsilon)$. We also prove a matching lower bound for any lifelong learning algorithm that uses a single task learner as a black box. We complement our analysis with an empirical study, including a heuristic lifelong learning algorithm for deep neural networks. Our method performs favorably on challenging realistic image datasets compared to state-of-the-art continual learning methods.",True,True,"Cao, Xinyuan and Liu, Weiyang and Vempala, Santosh S",2022.0,,,,,Provable Lifelong Learning of Representations,Provable Lifelong Learning of Representations,http://arxiv.org/pdf/2110.14098v2,"In lifelong learning, tasks (or classes) to be learned arrive sequentially over time in arbitrary order. During training, knowledge from previous tasks can be captured and transferred to subsequent ones to improve sample efficiency. We consider the setting where all target tasks can be represented in the span of a small number of unknown linear or nonlinear features of the input data. We propose a lifelong learning algorithm that maintains and refines the internal feature representation. We prove that for any desired accuracy on all tasks, the dimension of the representation remains close to that of the underlying representation. The resulting sample complexity improves significantly on existing bounds. In the setting of linear features, our algorithm is provably efficient and the sample complexity for input dimension $d$, $m$ tasks with $k$ features up to error $\epsilon$ is $\tilde{O}(dk^{1.5}/\epsilon+km/\epsilon)$. 
We also prove a matching lower bound for any lifelong learning algorithm that uses a single task learner as a black box. We complement our analysis with an empirical study, including a heuristic lifelong learning algorithm for deep neural networks. Our method performs favorably on challenging realistic image datasets compared to state-of-the-art continual learning methods." "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,li2022provable,\cite{li2022provable},Provable and Efficient Continual Representation Learning,http://arxiv.org/abs/2203.02026v2,"In continual learning (CL), the goal is to design models that can learn a sequence of tasks without catastrophic forgetting. While there is a rich set of techniques for CL, relatively little understanding exists on how representations built by previous tasks benefit new tasks that are added to the network. To address this, we study the problem of continual representation learning (CRL) where we learn an evolving representation as new tasks arrive. Focusing on zero-forgetting methods where tasks are embedded in subnetworks (e.g., PackNet), we first provide experiments demonstrating CRL can significantly boost sample efficiency when learning new tasks. To explain this, we establish theoretical guarantees for CRL by providing sample complexity and generalization error bounds for new tasks by formalizing the statistical benefits of previously-learned representations. Our analysis and experiments also highlight the importance of the order in which we learn the tasks. Specifically, we show that CL benefits if the initial tasks have large sample size and high ""representation diversity"". Diversity ensures that adding new tasks incurs small representation mismatch and can be learned with few samples while training only few additional nonzero weights. Finally, we ask whether one can ensure each task subnetwork to be efficient during inference time while retaining the benefits of representation learning. To this end, we propose an inference-efficient variation of PackNet called Efficient Sparse PackNet (ESPN) which employs joint channel & weight pruning. ESPN embeds tasks in channel-sparse subnets requiring up to 80% less FLOPs to compute while approximately retaining accuracy and is very competitive with a variety of baselines. In summary, this work takes a step towards data and compute-efficient CL with a representation learning perspective. GitHub page: https://github.com/ucr-optml/CtRL",True,True,"Li, Yingcong and Li, Mingchen and Asif, M Salman and Oymak, Samet",2022.0,,,,arXiv preprint arXiv:2203.02026,Provable and Efficient Continual Representation Learning,Provable and Efficient Continual Representation Learning,http://arxiv.org/pdf/2203.02026v2,"In continual learning (CL), the goal is to design models that can learn a sequence of tasks without catastrophic forgetting. While there is a rich set of techniques for CL, relatively little understanding exists on how representations built by previous tasks benefit new tasks that are added to the network. To address this, we study the problem of continual representation learning (CRL) where we learn an evolving representation as new tasks arrive. Focusing on zero-forgetting methods where tasks are embedded in subnetworks (e.g., PackNet), we first provide experiments demonstrating CRL can significantly boost sample efficiency when learning new tasks. 
To explain this, we establish theoretical guarantees for CRL by providing sample complexity and generalization error bounds for new tasks by formalizing the statistical benefits of previously-learned representations. Our analysis and experiments also highlight the importance of the order in which we learn the tasks. Specifically, we show that CL benefits if the initial tasks have large sample size and high ""representation diversity"". Diversity ensures that adding new tasks incurs small representation mismatch and can be learned with few samples while training only few additional nonzero weights. Finally, we ask whether one can ensure each task subnetwork to be efficient during inference time while retaining the benefits of representation learning. To this end, we propose an inference-efficient variation of PackNet called Efficient Sparse PackNet (ESPN) which employs joint channel & weight pruning. ESPN embeds tasks in channel-sparse subnets requiring up to 80% less FLOPs to compute while approximately retaining accuracy and is very competitive with a variety of baselines. In summary, this work takes a step towards data and compute-efficient CL with a representation learning perspective. GitHub page: https://github.com/ucr-optml/CtRL" "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,evron2023continual,\cite{evron2023continual},Continual Learning in Linear Classification on Separable Data,http://arxiv.org/abs/2306.03534v1,"We analyze continual learning on a sequence of separable linear classification tasks with binary labels. We show theoretically that learning with weak regularization reduces to solving a sequential max-margin problem, corresponding to a special case of the Projection Onto Convex Sets (POCS) framework. We then develop upper bounds on the forgetting and other quantities of interest under various settings with recurring tasks, including cyclic and random orderings of tasks. We discuss several practical implications to popular training practices like regularization scheduling and weighting. We point out several theoretical differences between our continual classification setting and a recently studied continual regression setting.",True,True,"Evron, Itay and Moroshko, Edward and Buzaglo, Gon and Khriesh, Maroun and Marjieh, Badea and Srebro, Nathan and Soudry, Daniel",2023.0,,,,,Continual Learning in Linear Classification on Separable Data,Continual Learning in Linear Classification on Separable Data,http://arxiv.org/pdf/2306.03534v1,"We analyze continual learning on a sequence of separable linear classification tasks with binary labels. We show theoretically that learning with weak regularization reduces to solving a sequential max-margin problem, corresponding to a special case of the Projection Onto Convex Sets (POCS) framework. We then develop upper bounds on the forgetting and other quantities of interest under various settings with recurring tasks, including cyclic and random orderings of tasks. We discuss several practical implications to popular training practices like regularization scheduling and weighting. We point out several theoretical differences between our continual classification setting and a recently studied continual regression setting." 
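The row above reduces sequential training on separable linear tasks to Projection Onto Convex Sets; the regression analogue (taken up in the next row via the Kaczmarz connection) is easy to simulate. Training each overparameterized task to convergence acts as an orthogonal projection onto that task's solution set, and forgetting is the residual the final iterate leaves on the first task. The dimensions, the shared teacher, and the residual metric below are illustrative assumptions.

```python
# Continual linear regression as alternating projections: each task update
# moves the weights to the nearest point fitting the current task exactly,
# which is what gradient descent to convergence does in the
# overparameterized regime.
import numpy as np

rng = np.random.default_rng(0)
d, n, T = 50, 10, 5                       # d >> n per task: overparameterized
w_teacher = rng.normal(size=d)            # shared ground truth across tasks
tasks = [rng.normal(size=(n, d)) for _ in range(T)]
tasks = [(A, A @ w_teacher) for A in tasks]

w = np.zeros(d)
for A, b in tasks:                        # one sequential pass over tasks
    # Orthogonal projection of w onto the affine set {v : A v = b}.
    w = w + np.linalg.pinv(A) @ (b - A @ w)

A1, b1 = tasks[0]
print("task-1 residual after the sequence:", np.linalg.norm(A1 @ w - b1))
```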
"Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,evron2022catastrophic,\cite{evron2022catastrophic},How catastrophic can catastrophic forgetting be in linear regression?,http://arxiv.org/abs/2205.09588v2,"To better understand catastrophic forgetting, we study fitting an overparameterized linear model to a sequence of tasks with different input distributions. We analyze how much the model forgets the true labels of earlier tasks after training on subsequent tasks, obtaining exact expressions and bounds. We establish connections between continual learning in the linear setting and two other research areas: alternating projections and the Kaczmarz method. In specific settings, we highlight differences between forgetting and convergence to the offline solution as studied in those areas. In particular, when T tasks in d dimensions are presented cyclically for k iterations, we prove an upper bound of T^2 * min{1/sqrt(k), d/k} on the forgetting. This stands in contrast to the convergence to the offline solution, which can be arbitrarily slow according to existing alternating projection results. We further show that the T^2 factor can be lifted when tasks are presented in a random ordering.",True,True,"Evron, Itay and Moroshko, Edward and Ward, Rachel and Srebro, Nathan and Soudry, Daniel",2022.0,,,,,How catastrophic can catastrophic forgetting be in linear regression?,How catastrophic can catastrophic forgetting be in linear regression?,http://arxiv.org/pdf/2205.09588v2,"To better understand catastrophic forgetting, we study fitting an overparameterized linear model to a sequence of tasks with different input distributions. We analyze how much the model forgets the true labels of earlier tasks after training on subsequent tasks, obtaining exact expressions and bounds. We establish connections between continual learning in the linear setting and two other research areas: alternating projections and the Kaczmarz method. In specific settings, we highlight differences between forgetting and convergence to the offline solution as studied in those areas. In particular, when T tasks in d dimensions are presented cyclically for k iterations, we prove an upper bound of T^2 * min{1/sqrt(k), d/k} on the forgetting. This stands in contrast to the convergence to the offline solution, which can be arbitrarily slow according to existing alternating projection results. We further show that the T^2 factor can be lifted when tasks are presented in a random ordering." "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,lin2023theory,\cite{lin2023theory},Theory on Forgetting and Generalization of Continual Learning,http://arxiv.org/abs/2302.05836v1,"Continual learning (CL), which aims to learn a sequence of tasks, has attracted significant recent attention. However, most work has focused on the experimental performance of CL, and theoretical studies of CL are still limited. In particular, there is a lack of understanding on what factors are important and how they affect ""catastrophic forgetting"" and generalization performance. To fill this gap, our theoretical analysis, under overparameterized linear models, provides the first-known explicit form of the expected forgetting and generalization error. Further analysis of such a key result yields a number of theoretical explanations about how overparameterization, task similarity, and task ordering affect both forgetting and generalization error of CL. 
More interestingly, by conducting experiments on real datasets using deep neural networks (DNNs), we show that some of these insights even go beyond the linear models and can be carried over to practical setups. In particular, we use concrete examples to show that our results not only explain some interesting empirical observations in recent studies, but also motivate better practical algorithm designs of CL.",True,True,"Lin, Sen and Ju, Peizhong and Liang, Yingbin and Shroff, Ness",2023.0,,,,,Theory on Forgetting and Generalization of Continual Learning,Theory on Forgetting and Generalization of Continual Learning,http://arxiv.org/pdf/2302.05836v1,"Continual learning (CL), which aims to learn a sequence of tasks, has attracted significant recent attention. However, most work has focused on the experimental performance of CL, and theoretical studies of CL are still limited. In particular, there is a lack of understanding on what factors are important and how they affect ""catastrophic forgetting"" and generalization performance. To fill this gap, our theoretical analysis, under overparameterized linear models, provides the first-known explicit form of the expected forgetting and generalization error. Further analysis of such a key result yields a number of theoretical explanations about how overparameterization, task similarity, and task ordering affect both forgetting and generalization error of CL. More interestingly, by conducting experiments on real datasets using deep neural networks (DNNs), we show that some of these insights even go beyond the linear models and can be carried over to practical setups. In particular, we use concrete examples to show that our results not only explain some interesting empirical observations in recent studies, but also motivate better practical algorithm designs of CL." "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,ding2024understanding,\cite{ding2024understanding},Understanding Forgetting in Continual Learning with Linear Regression,http://arxiv.org/abs/2405.17583v1,"Continual learning, focused on sequentially learning multiple tasks, has gained significant attention recently. Despite the tremendous progress made in the past, the theoretical understanding, especially factors contributing to catastrophic forgetting, remains relatively unexplored. In this paper, we provide a general theoretical analysis of forgetting in the linear regression model via Stochastic Gradient Descent (SGD) applicable to both underparameterized and overparameterized regimes. Our theoretical framework reveals some interesting insights into the intricate relationship between task sequence and algorithmic parameters, an aspect not fully captured in previous studies due to their restrictive assumptions. Specifically, we demonstrate that, given a sufficiently large data size, the arrangement of tasks in a sequence, where tasks with larger eigenvalues in their population data covariance matrices are trained later, tends to result in increased forgetting. Additionally, our findings highlight that an appropriate choice of step size will help mitigate forgetting in both underparameterized and overparameterized settings. To validate our theoretical analysis, we conducted simulation experiments on both linear regression models and Deep Neural Networks (DNNs). 
Results from these simulations substantiate our theoretical findings.",True,True,"Ding, Meng and Ji, Kaiyi and Wang, Di and Xu, Jinhui",2024.0,,,,,Understanding Forgetting in Continual Learning with Linear Regression,Understanding Forgetting in Continual Learning with Linear Regression,http://arxiv.org/pdf/2405.17583v1,"Continual learning, focused on sequentially learning multiple tasks, has gained significant attention recently. Despite the tremendous progress made in the past, the theoretical understanding, especially factors contributing to catastrophic forgetting, remains relatively unexplored. In this paper, we provide a general theoretical analysis of forgetting in the linear regression model via Stochastic Gradient Descent (SGD) applicable to both underparameterized and overparameterized regimes. Our theoretical framework reveals some interesting insights into the intricate relationship between task sequence and algorithmic parameters, an aspect not fully captured in previous studies due to their restrictive assumptions. Specifically, we demonstrate that, given a sufficiently large data size, the arrangement of tasks in a sequence, where tasks with larger eigenvalues in their population data covariance matrices are trained later, tends to result in increased forgetting. Additionally, our findings highlight that an appropriate choice of step size will help mitigate forgetting in both underparameterized and overparameterized settings. To validate our theoretical analysis, we conducted simulation experiments on both linear regression models and Deep Neural Networks (DNNs). Results from these simulations substantiate our theoretical findings." "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,goldfarb2023analysis,\cite{goldfarb2023analysis},"Analysis of Catastrophic Forgetting for Random Orthogonal Transformation Tasks in the Overparameterized Regime",http://arxiv.org/abs/2207.06475v1,"Overparameterization is known to permit strong generalization performance in neural networks. In this work, we provide an initial theoretical analysis of its effect on catastrophic forgetting in a continual learning setup. We show experimentally that in permuted MNIST image classification tasks, the generalization performance of multilayer perceptrons trained by vanilla stochastic gradient descent can be improved by overparameterization, and the extent of the performance increase achieved by overparameterization is comparable to that of state-of-the-art continual learning algorithms. We provide a theoretical explanation of this effect by studying a qualitatively similar two-task linear regression problem, where each task is related by a random orthogonal transformation. 
We show that when a model is trained on the two tasks in sequence without any additional regularization, the risk gain on the first task is small if the model is sufficiently overparameterized.",True,True,"Goldfarb, Daniel and Hand, Paul",2023.0,,,,,"Analysis of Catastrophic Forgetting for Random Orthogonal Transformation Tasks in the Overparameterized Regime",[PDF] Analysis of Catastrophic Forgetting for Random Orthogonal ...,https://proceedings.mlr.press/v206/goldfarb23a/goldfarb23a.pdf,Missing: 04/08/2025 "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,zhao2024statistical,\cite{zhao2024statistical},A Statistical Theory of Regularization-Based Continual Learning,http://arxiv.org/abs/2406.06213v1,"We provide a statistical analysis of regularization-based continual learning on a sequence of linear regression tasks, with emphasis on how different regularization terms affect the model performance. We first derive the convergence rate for the oracle estimator obtained as if all data were available simultaneously. Next, we consider a family of generalized $\ell_2$-regularization algorithms indexed by matrix-valued hyperparameters, which includes the minimum norm estimator and continual ridge regression as special cases. As more tasks are introduced, we derive an iterative update formula for the estimation error of generalized $\ell_2$-regularized estimators, from which we determine the hyperparameters resulting in the optimal algorithm. Interestingly, the choice of hyperparameters can effectively balance the trade-off between forward and backward knowledge transfer and adjust for data heterogeneity. Moreover, the estimation error of the optimal algorithm is derived explicitly, which is of the same order as that of the oracle estimator. In contrast, our lower bounds for the minimum norm estimator and continual ridge regression show their suboptimality. A byproduct of our theoretical analysis is the equivalence between early stopping and generalized $\ell_2$-regularization in continual learning, which may be of independent interest. Finally, we conduct experiments to complement our theory.",True,True,"Zhao, Xuyang and Wang, Huiyuan and Huang, Weiran and Lin, Wei",2024.0,,,,,A Statistical Theory of Regularization-Based Continual Learning,[PDF] A Statistical Theory of Regularization-Based Continual Learning,https://openreview.net/pdf?id=A54CXWn9VB,"We provide a statistical analysis of regularization- based continual learning on a sequence of linear regression tasks, with emphasis on how differ-." "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,li2024theory,\cite{li2024theory},Theory on Mixture-of-Experts in Continual Learning,http://arxiv.org/abs/2406.16437v3,"Continual learning (CL) has garnered significant attention because of its ability to adapt to new tasks that arrive over time. Catastrophic forgetting (of old tasks) has been identified as a major issue in CL, as the model adapts to new tasks. The Mixture-of-Experts (MoE) model has recently been shown to effectively mitigate catastrophic forgetting in CL, by employing a gating network to sparsify and distribute diverse tasks among multiple experts. However, there is a lack of theoretical analysis of MoE and its impact on the learning performance in CL. This paper provides the first theoretical results to characterize the impact of MoE in CL via the lens of overparameterized linear regression tasks. 
We establish the benefit of MoE over a single expert by proving that the MoE model can diversify its experts to specialize in different tasks, while its router learns to select the right expert for each task and balance the loads across all experts. Our study further suggests an intriguing fact that the MoE in CL needs to terminate the update of the gating network after sufficient training rounds to attain system convergence, which is not needed in the existing MoE studies that do not consider the continual task arrival. Furthermore, we provide explicit expressions for the expected forgetting and overall generalization error to characterize the benefit of MoE in the learning performance in CL. Interestingly, adding more experts requires additional rounds before convergence, which may not enhance the learning performance. Finally, we conduct experiments on both synthetic and real datasets to extend these insights from linear models to deep neural networks (DNNs), which also shed light on the practical algorithm design for MoE in CL.",True,True,"Li, Hongbo and Lin, Sen and Duan, Lingjie and Liang, Yingbin and Shroff, Ness B",2024.0,,,,arXiv preprint arXiv:2406.16437,Theory on Mixture-of-Experts in Continual Learning,Theory on Mixture-of-Experts in Continual Learning,https://openreview.net/forum?id=7XgKAabsPp,"by H Li · Cited by 24 — This paper provides a theoretical study of Mixture-of-Experts (MoE) models for Continual Learning (CL). Specifically, it examines the CL of linear regression" "Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective",2506.00205v1,banayeeanzade2024theoretical,\cite{banayeeanzade2024theoretical},"Theoretical Insights into Overparameterized Models in Multi-Task and Replay-Based Continual Learning",http://arxiv.org/abs/2408.16939v2,"Multi-task learning (MTL) is a machine learning paradigm that aims to improve the generalization performance of a model on multiple related tasks by training it simultaneously on those tasks. Unlike MTL, where the model has instant access to the training data of all tasks, continual learning (CL) involves adapting to new sequentially arriving tasks over time without forgetting the previously acquired knowledge. Despite the wide practical adoption of CL and MTL and extensive literature on both areas, there remains a gap in the theoretical understanding of these methods when used with overparameterized models such as deep neural networks. This paper studies the overparameterized linear models as a proxy for more complex models. We develop theoretical results describing the effect of various system parameters on the model's performance in an MTL setup. Specifically, we study the impact of model size, dataset size, and task similarity on the generalization error and knowledge transfer. Additionally, we present theoretical results to characterize the performance of replay-based CL models. Our results reveal the impact of buffer size and model capacity on the forgetting rate in a CL setup and help shed light on some of the state-of-the-art CL methods. 
Finally, through extensive empirical evaluations, we demonstrate that our theoretical findings are also applicable to deep neural networks, offering valuable guidance for designing MTL and CL models in practice.",True,True,"Banayeeanzade, Mohammadamin and Soltanolkotabi, Mahdi and Rostami, Mohammad",2024.0,,,,arXiv preprint arXiv:2408.16939,"Theoretical Insights into Overparameterized Models in Multi-Task and Replay-Based Continual Learning",Theoretical Insights into Overparameterized Models in Multi-Task...,https://openreview.net/forum?id=4zGPT0ZwnU,"The paper provides theoretical insights into multi-task learning (MTL) and replay-based continual learning (CL) in overparameterized settings, using linear" "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,Informer,\cite{Informer},"Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting",http://arxiv.org/abs/2012.07436v3,"Many real-world applications require the prediction of long sequence time-series, such as electricity consumption planning. Long sequence time-series forecasting (LSTF) demands a high prediction capacity of the model, which is the ability to capture precise long-range dependency coupling between output and input efficiently. Recent studies have shown the potential of Transformer to increase the prediction capacity. However, there are several severe issues with Transformer that prevent it from being directly applicable to LSTF, including quadratic time complexity, high memory usage, and inherent limitation of the encoder-decoder architecture. To address these issues, we design an efficient transformer-based model for LSTF, named Informer, with three distinctive characteristics: (i) a $ProbSparse$ self-attention mechanism, which achieves $O(L \log L)$ in time complexity and memory usage, and has comparable performance on sequences' dependency alignment. (ii) the self-attention distilling highlights dominating attention by halving cascading layer input, and efficiently handles extreme long input sequences. (iii) the generative style decoder, while conceptually simple, predicts the long time-series sequences at one forward operation rather than a step-by-step way, which drastically improves the inference speed of long-sequence predictions. Extensive experiments on four large-scale datasets demonstrate that Informer significantly outperforms existing methods and provides a new solution to the LSTF problem.",True,True,"Haoyi Zhou and Shanghang Zhang and Jieqi Peng and Shuai Zhang and Jianxin Li and Hui Xiong and Wancai Zhang",2021.0,,,10.1609/AAAI.V35I12.17325,,"Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting",zhouhaoyi/Informer2020: The GitHub repository for the paper ...,https://github.com/zhouhaoyi/Informer2020,This is the origin Pytorch implementation of Informer in the following paper: Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,Pyraformer,\cite{Pyraformer},"Pyraformer: Low-Complexity Pyramidal Attention for Long-Range Time Series Modeling and Forecasting",,,True,False,"Shizhan Liu and Hang Yu and Cong Liao and Jianguo Li and Weiyao Lin and Alex X. 
Liu and Schahram Dustdar",2022.0,,https://openreview.net/forum?id=0EXmFzUn5I,,,"Pyraformer: Low-Complexity Pyramidal Attention for Long-Range Time Series Modeling and Forecasting",Pyraformer: Low-Complexity Pyramidal Attention for Long-Range ...,https://openreview.net/forum?id=0EXmFzUn5I,We propose a multiresolution pyramidal attention mechanism for long-range dependence modeling and time series forecasting. "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,LogSparse,\cite{LogSparse},"Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting",http://arxiv.org/abs/1907.00235v3,"Time series forecasting is an important problem across many domains, including predictions of solar plant energy output, electricity consumption, and traffic jam situation. In this paper, we propose to tackle such forecasting problem with Transformer [1]. Although impressed by its performance in our preliminary study, we found its two major weaknesses: (1) locality-agnostics: the point-wise dot-product self-attention in canonical Transformer architecture is insensitive to local context, which can make the model prone to anomalies in time series; (2) memory bottleneck: space complexity of canonical Transformer grows quadratically with sequence length $L$, making directly modeling long time series infeasible. In order to solve these two issues, we first propose convolutional self-attention by producing queries and keys with causal convolution so that local context can be better incorporated into attention mechanism. Then, we propose LogSparse Transformer with only $O(L(\log L)^{2})$ memory cost, improving forecasting accuracy for time series with fine granularity and strong long-term dependencies under constrained memory budget. Our experiments on both synthetic data and real-world datasets show that it compares favorably to the state-of-the-art.",True,True,"Shiyang Li and Xiaoyong Jin and Yao Xuan and Xiyou Zhou and Wenhu Chen and Yu{-}Xiang Wang and Xifeng Yan",2019.0,,https://proceedings.neurips.cc/paper/2019/hash/6775a0635c302542da2c32aa19d86be0-Abstract.html,,,"Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting",Enhancing the Locality and Breaking the Memory ...,http://papers.neurips.cc/paper/8766-enhancing-the-locality-and-breaking-the-memory-bottleneck-of-transformer-on-time-series-forecasting.pdf,by S Li · Cited by 2226 — We successfully apply Transformer architecture to time series forecasting and perform extensive experiments on both synthetic and real datasets to validate "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,PatchTST,\cite{PatchTST},A Time Series is Worth 64 Words: Long-term Forecasting with Transformers,http://arxiv.org/abs/2211.14730v2,"We propose an efficient design of Transformer-based models for multivariate time series forecasting and self-supervised representation learning. It is based on two key components: (i) segmentation of time series into subseries-level patches which are served as input tokens to Transformer; (ii) channel-independence where each channel contains a single univariate time series that shares the same embedding and Transformer weights across all the series. 
Patching design naturally has three-fold benefit: local semantic information is retained in the embedding; computation and memory usage of the attention maps are quadratically reduced given the same look-back window; and the model can attend longer history. Our channel-independent patch time series Transformer (PatchTST) can improve the long-term forecasting accuracy significantly when compared with that of SOTA Transformer-based models. We also apply our model to self-supervised pre-training tasks and attain excellent fine-tuning performance, which outperforms supervised training on large datasets. Transferring of masked pre-trained representation on one dataset to others also produces SOTA forecasting accuracy. Code is available at: https://github.com/yuqinie98/PatchTST.",True,True,"Yuqi Nie and Nam H. Nguyen and Phanwadee Sinthong and Jayant Kalagnanam",2023.0,,https://openreview.net/forum?id=Jbdc0vTOcol,,,A Time Series is Worth 64 Words: Long-term Forecasting with Transformers,PatchTST (ICLR 2023) - GitHub,https://github.com/yuqinie98/PatchTST,This is an offical implementation of PatchTST: A Time Series is Worth 64 Words: Long-term Forecasting with Transformers. Our model has been included in "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,Crossformer,\cite{Crossformer},"Crossformer: Transformer Utilizing Cross-Dimension Dependency for Multivariate Time Series Forecasting",,,True,False,"Yunhao Zhang and Junchi Yan",2023.0,,https://openreview.net/forum?id=vSVLM2j9eie,,,"Crossformer: Transformer Utilizing Cross-Dimension Dependency for Multivariate Time Series Forecasting",Crossformer: Transformer Utilizing Cross-Dimension ...,https://openreview.net/forum?id=vSVLM2j9eie,"by Y Zhang · Cited by 1238 — We propose Crossformer, a Transformer-based model that explicitly utilizes cross-dimension dependency for multivariate time series forecasting." "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,Autoformer,\cite{Autoformer},"Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting",http://arxiv.org/abs/2106.13008v5,"Extending the forecasting time is a critical demand for real applications, such as extreme weather early warning and long-term energy consumption planning. This paper studies the long-term forecasting problem of time series. Prior Transformer-based models adopt various self-attention mechanisms to discover the long-range dependencies. However, intricate temporal patterns of the long-term future prohibit the model from finding reliable dependencies. Also, Transformers have to adopt the sparse versions of point-wise self-attentions for long series efficiency, resulting in the information utilization bottleneck. Going beyond Transformers, we design Autoformer as a novel decomposition architecture with an Auto-Correlation mechanism. We break with the pre-processing convention of series decomposition and renovate it as a basic inner block of deep models. This design empowers Autoformer with progressive decomposition capacities for complex time series. Further, inspired by the stochastic process theory, we design the Auto-Correlation mechanism based on the series periodicity, which conducts the dependencies discovery and representation aggregation at the sub-series level. Auto-Correlation outperforms self-attention in both efficiency and accuracy. 
In long-term forecasting, Autoformer yields state-of-the-art accuracy, with a 38% relative improvement on six benchmarks, covering five practical applications: energy, traffic, economics, weather and disease. Code is available at this repository: \url{https://github.com/thuml/Autoformer}.",True,True,"Haixu Wu and Jiehui Xu and Jianmin Wang and Mingsheng Long",2021.0,,https://proceedings.neurips.cc/paper/2021/hash/bcc0d400288793e8bdcd7c19a8ac0c2b-Abstract.html,,,"Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting",[PDF] Decomposition Transformers with Auto-Correlation for Long-Term ...,https://ise.thss.tsinghua.edu.cn/~mlong/doc/Autoformer-nips21.pdf,"Autoformer achieves a 38% relative improvement under the long-term setting on six bench- marks, covering five real-world applications: energy, traffic," "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,FEDformer,\cite{FEDformer},"FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting",http://arxiv.org/abs/2201.12740v3,"Although Transformer-based methods have significantly improved state-of-the-art results for long-term series forecasting, they are not only computationally expensive but more importantly, are unable to capture the global view of time series (e.g. overall trend). To address these problems, we propose to combine Transformer with the seasonal-trend decomposition method, in which the decomposition method captures the global profile of time series while Transformers capture more detailed structures. To further enhance the performance of Transformer for long-term prediction, we exploit the fact that most time series tend to have a sparse representation in well-known basis such as Fourier transform, and develop a frequency enhanced Transformer. Besides being more effective, the proposed method, termed as Frequency Enhanced Decomposed Transformer ({\bf FEDformer}), is more efficient than standard Transformer with a linear complexity to the sequence length. Our empirical studies with six benchmark datasets show that compared with state-of-the-art methods, FEDformer can reduce prediction error by $14.8\%$ and $22.6\%$ for multivariate and univariate time series, respectively. Code is publicly available at https://github.com/MAZiqing/FEDformer.",True,True,"Tian Zhou and Ziqing Ma and Qingsong Wen and Xue Wang and Liang Sun and Rong Jin",2022.0,,https://proceedings.mlr.press/v162/zhou22g.html,,,"FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting",MAZiqing/FEDformer,https://github.com/MAZiqing/FEDformer,Frequency Enhanced Decomposed Transformer (FEDformer) is more efficient than standard Transformer with a linear complexity to the sequence length.See more "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,MICN,\cite{MICN},"{MICN:} Multi-scale Local and Global Context Modeling for Long-term Series Forecasting",,,True,False,"Huiqiang Wang and Jian Peng and Feihu Huang and Jince Wang and Junhui Chen and Yifei Xiao",2023.0,,https://openreview.net/forum?id=zt53IDUR1U,,,"{MICN:} Multi-scale Local and Global Context Modeling for Long-term Series Forecasting",Modeling Temporal Symmetry: Dual-Component Framework for ...,https://www.mdpi.com/2073-8994/17/4/577,Micn: Multi-scale local and global context modeling for long-term series forecasting. 
In Proceedings of the Eleventh International Conference on Learning Representations. "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,TimesNet,\cite{TimesNet},"TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis",http://arxiv.org/abs/2210.02186v3,"Time series analysis is of immense importance in extensive applications, such as weather forecasting, anomaly detection, and action recognition. This paper focuses on temporal variation modeling, which is the common key problem of extensive analysis tasks. Previous methods attempt to accomplish this directly from the 1D time series, which is extremely challenging due to the intricate temporal patterns. Based on the observation of multi-periodicity in time series, we ravel out the complex temporal variations into the multiple intraperiod- and interperiod-variations. To tackle the limitations of 1D time series in representation capability, we extend the analysis of temporal variations into the 2D space by transforming the 1D time series into a set of 2D tensors based on multiple periods. This transformation can embed the intraperiod- and interperiod-variations into the columns and rows of the 2D tensors respectively, making the 2D-variations to be easily modeled by 2D kernels. Technically, we propose the TimesNet with TimesBlock as a task-general backbone for time series analysis. TimesBlock can discover the multi-periodicity adaptively and extract the complex temporal variations from transformed 2D tensors by a parameter-efficient inception block. Our proposed TimesNet achieves consistent state-of-the-art in five mainstream time series analysis tasks, including short- and long-term forecasting, imputation, classification, and anomaly detection. Code is available at this repository: https://github.com/thuml/TimesNet.",True,True,"Haixu Wu and Tengge Hu and Yong Liu and Hang Zhou and Jianmin Wang and Mingsheng Long",2023.0,,https://openreview.net/forum?id=ju\_Uqw384Oq,,,"TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis",The complete code and scripts of TimesNet ...,https://github.com/thuml/TimesNet,"GitHub - thuml/TimesNet: Code release for ""TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis"" (ICLR 2023), https://openreview.net/pdf?id=ju_Uqw384Oq. In this paper, we present TimesNet as a powerful foundation model for general time series analysis. Benefiting from 2D kernel design, TimesNet can learn appropriate representations for different tasks, demonstrating its task generality as a foundation model." "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,DLinear,\cite{DLinear},Are Transformers Effective for Time Series Forecasting?,http://arxiv.org/abs/2205.13504v3,"Recently, there has been a surge of Transformer-based solutions for the long-term time series forecasting (LTSF) task.
Despite the growing performance over the past few years, we question the validity of this line of research in this work. Specifically, Transformers are arguably the most successful solution to extract the semantic correlations among the elements in a long sequence. However, in time series modeling, we are to extract the temporal relations in an ordered set of continuous points. While employing positional encoding and using tokens to embed sub-series in Transformers facilitate preserving some ordering information, the nature of the \emph{permutation-invariant} self-attention mechanism inevitably results in temporal information loss. To validate our claim, we introduce a set of embarrassingly simple one-layer linear models named LTSF-Linear for comparison. Experimental results on nine real-life datasets show that LTSF-Linear surprisingly outperforms existing sophisticated Transformer-based LTSF models in all cases, and often by a large margin. Moreover, we conduct comprehensive empirical studies to explore the impacts of various design elements of LTSF models on their temporal relation extraction capability. We hope this surprising finding opens up new research directions for the LTSF task. We also advocate revisiting the validity of Transformer-based solutions for other time series analysis tasks (e.g., anomaly detection) in the future. Code is available at: \url{https://github.com/cure-lab/LTSF-Linear}.",True,True,"Ailing Zeng and Muxi Chen and Lei Zhang and Qiang Xu",2023.0,,,10.1609/AAAI.V37I9.26317,,Are Transformers Effective for Time Series Forecasting?,Are Transformers Effective for Time Series Forecasting?,http://arxiv.org/pdf/2205.13504v3,"Recently, there has been a surge of Transformer-based solutions for the long-term time series forecasting (LTSF) task. Despite the growing performance over the past few years, we question the validity of this line of research in this work. Specifically, Transformers are arguably the most successful solution to extract the semantic correlations among the elements in a long sequence. However, in time series modeling, we are to extract the temporal relations in an ordered set of continuous points. While employing positional encoding and using tokens to embed sub-series in Transformers facilitate preserving some ordering information, the nature of the \emph{permutation-invariant} self-attention mechanism inevitably results in temporal information loss. To validate our claim, we introduce a set of embarrassingly simple one-layer linear models named LTSF-Linear for comparison. Experimental results on nine real-life datasets show that LTSF-Linear surprisingly outperforms existing sophisticated Transformer-based LTSF models in all cases, and often by a large margin. Moreover, we conduct comprehensive empirical studies to explore the impacts of various design elements of LTSF models on their temporal relation extraction capability. We hope this surprising finding opens up new research directions for the LTSF task. We also advocate revisiting the validity of Transformer-based solutions for other time series analysis tasks (e.g., anomaly detection) in the future. Code is available at: \url{https://github.com/cure-lab/LTSF-Linear}."
"Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,TSMixer,\cite{TSMixer},"TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting",http://arxiv.org/abs/2306.09364v4,"Transformers have gained popularity in time series forecasting for their ability to capture long-sequence interactions. However, their high memory and computing requirements pose a critical bottleneck for long-term forecasting. To address this, we propose TSMixer, a lightweight neural architecture exclusively composed of multi-layer perceptron (MLP) modules for multivariate forecasting and representation learning on patched time series. Inspired by MLP-Mixer's success in computer vision, we adapt it for time series, addressing challenges and introducing validated components for enhanced accuracy. This includes a novel design paradigm of attaching online reconciliation heads to the MLP-Mixer backbone, for explicitly modeling the time-series properties such as hierarchy and channel-correlations. We also propose a novel Hybrid channel modeling and infusion of a simple gating approach to effectively handle noisy channel interactions and generalization across diverse datasets. By incorporating these lightweight components, we significantly enhance the learning capability of simple MLP structures, outperforming complex Transformer models with minimal computing usage. Moreover, TSMixer's modular design enables compatibility with both supervised and masked self-supervised learning methods, making it a promising building block for time-series Foundation Models. TSMixer outperforms state-of-the-art MLP and Transformer models in forecasting by a considerable margin of 8-60%. It also outperforms the latest strong benchmarks of Patch-Transformer models (by 1-2%) with a significant reduction in memory and runtime (2-3X). The source code of our model is officially released as PatchTSMixer in the HuggingFace. Model: https://huggingface.co/docs/transformers/main/en/model_doc/patchtsmixer Examples: https://github.com/ibm/tsfm/#notebooks-links",True,True,"Vijay Ekambaram and Arindam Jati and Nam Nguyen and Phanwadee Sinthong and Jayant Kalagnanam",2023.0,,,10.1145/3580305.3599533,,"TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting",Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting,https://dl.acm.org/doi/10.1145/3580305.3599533,"TSMixer is designed for multivariate forecasting and representation learning on patched time series, providing an efficient alternative to Transformers." "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,DeepAR,\cite{DeepAR},DeepAR: Probabilistic Forecasting with Autoregressive Recurrent Networks,http://arxiv.org/abs/1704.04110v3,"Probabilistic forecasting, i.e. estimating the probability distribution of a time series' future given its past, is a key enabler for optimizing business processes. In retail businesses, for example, forecasting demand is crucial for having the right inventory available at the right time at the right place. In this paper we propose DeepAR, a methodology for producing accurate probabilistic forecasts, based on training an auto regressive recurrent network model on a large number of related time series. We demonstrate how by applying deep learning techniques to forecasting, one can overcome many of the challenges faced by widely-used classical approaches to the problem. 
We show through extensive empirical evaluation on several real-world forecasting data sets accuracy improvements of around 15% compared to state-of-the-art methods.",True,True,"David Salinas and Valentin Flunkert and Jan Gasthaus and Tim Januschowski",2020.0,,,https://doi.org/10.1016/j.ijforecast.2019.07.001,International Journal of Forecasting,DeepAR: Probabilistic Forecasting with Autoregressive Recurrent Networks,DeepAR: Probabilistic Forecasting with Autoregressive Recurrent Networks,http://arxiv.org/pdf/1704.04110v3,"Probabilistic forecasting, i.e. estimating the probability distribution of a time series' future given its past, is a key enabler for optimizing business processes. In retail businesses, for example, forecasting demand is crucial for having the right inventory available at the right time at the right place. In this paper we propose DeepAR, a methodology for producing accurate probabilistic forecasts, based on training an auto regressive recurrent network model on a large number of related time series. We demonstrate how by applying deep learning techniques to forecasting, one can overcome many of the challenges faced by widely-used classical approaches to the problem. We show through extensive empirical evaluation on several real-world forecasting data sets accuracy improvements of around 15% compared to state-of-the-art methods." "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,D3VAE,\cite{D3VAE},"Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement",,,True,False,"Yan Li and Xinjiang Lu and Yaqing Wang and Dejing Dou",2022.0,,http://papers.nips.cc/paper\_files/paper/2022/hash/91a85f3fb8f570e6be52b333b5ab017a-Abstract-Conference.html,,,"Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement","Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement",http://arxiv.org/pdf/2301.03028v1,"Time series forecasting has been a widely explored task of great importance in many applications. However, it is common that real-world time series data are recorded in a short time period, which results in a big gap between the deep model and the limited and noisy time series. In this work, we propose to address the time series forecasting problem with generative modeling and propose a bidirectional variational auto-encoder (BVAE) equipped with diffusion, denoise, and disentanglement, namely D3VAE. Specifically, a coupled diffusion probabilistic model is proposed to augment the time series data without increasing the aleatoric uncertainty and implement a more tractable inference process with BVAE. To ensure the generated series move toward the true target, we further propose to adapt and integrate the multiscale denoising score matching into the diffusion process for time series forecasting. In addition, to enhance the interpretability and stability of the prediction, we treat the latent variable in a multivariate manner and disentangle them on top of minimizing total correlation. Extensive experiments on synthetic and real-world data show that D3VAE outperforms competitive algorithms with remarkable margins. Our implementation is available at https://github.com/PaddlePaddle/PaddleSpatial/tree/main/research/D3VAE." "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,CF-RNN,\cite{CF-RNN},Conformal Time-series Forecasting,,,True,False,"Kamile Stankeviciute and Ahmed M. 
Alaa and Mihaela van der Schaar",2021.0,,https://proceedings.neurips.cc/paper/2021/hash/312f1ba2a72318edaaa995a67835fad5-Abstract.html,,,Conformal Time-series Forecasting,Conformal Time-Series Forecasting,https://proceedings.neurips.cc/paper/2021/file/312f1ba2a72318edaaa995a67835fad5-Paper.pdf,"by K Stankeviciute · 2021 · Cited by 189 — In this work, we extend the inductive conformal prediction framework to the time-series forecasting setup, and propose a lightweight uncertainty estimation" "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,EnbPI,\cite{EnbPI},Conformal prediction interval for dynamic time-series,,,True,False,"Chen Xu and Yao Xie",2021.0,,http://proceedings.mlr.press/v139/xu21h.html,,,Conformal prediction interval for dynamic time-series,[PDF] Conformal Prediction Interval for Dynamic Time-Series,https://proceedings.mlr.press/v139/xu21h/xu21h.pdf,"Abstract. We develop a method to construct distribution- free prediction intervals for dynamic time-series, called EnbPI that wraps around any bootstrap." "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,EnbPI2,\cite{EnbPI2},Conformal prediction for time series,http://arxiv.org/abs/2010.09107v15,"We develop a general framework for constructing distribution-free prediction intervals for time series. Theoretically, we establish explicit bounds on conditional and marginal coverage gaps of estimated prediction intervals, which asymptotically converge to zero under additional assumptions. We obtain similar bounds on the size of set differences between oracle and estimated prediction intervals. Methodologically, we introduce a computationally efficient algorithm called \texttt{EnbPI} that wraps around ensemble predictors, which is closely related to conformal prediction (CP) but does not require data exchangeability. \texttt{EnbPI} avoids data-splitting and is computationally efficient by avoiding retraining and thus scalable to sequentially producing prediction intervals. We perform extensive simulation and real-data analyses to demonstrate its effectiveness compared with existing methods. We also discuss the extension of \texttt{EnbPI} on various other applications.",True,True,"Chen Xu and Yao Xie",2023.0,,,10.1109/TPAMI.2023.3272339,{IEEE} Trans. Pattern Anal. Mach. Intell.,Conformal prediction for time series,Conformal prediction for time series,http://arxiv.org/pdf/2010.09107v15,"We develop a general framework for constructing distribution-free prediction intervals for time series. Theoretically, we establish explicit bounds on conditional and marginal coverage gaps of estimated prediction intervals, which asymptotically converge to zero under additional assumptions. We obtain similar bounds on the size of set differences between oracle and estimated prediction intervals. Methodologically, we introduce a computationally efficient algorithm called \texttt{EnbPI} that wraps around ensemble predictors, which is closely related to conformal prediction (CP) but does not require data exchangeability. \texttt{EnbPI} avoids data-splitting and is computationally efficient by avoiding retraining and thus scalable to sequentially producing prediction intervals. We perform extensive simulation and real-data analyses to demonstrate its effectiveness compared with existing methods. We also discuss the extension of \texttt{EnbPI} on various other applications." 
"Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,Prescriptive,\cite{Prescriptive},From Predictive to Prescriptive Analytics,http://arxiv.org/abs/1402.5481v4,"In this paper, we combine ideas from machine learning (ML) and operations research and management science (OR/MS) in developing a framework, along with specific methods, for using data to prescribe optimal decisions in OR/MS problems. In a departure from other work on data-driven optimization and reflecting our practical experience with the data available in applications of OR/MS, we consider data consisting, not only of observations of quantities with direct effect on costs/revenues, such as demand or returns, but predominantly of observations of associated auxiliary quantities. The main problem of interest is a conditional stochastic optimization problem, given imperfect observations, where the joint probability distributions that specify the problem are unknown. We demonstrate that our proposed solution methods, which are inspired by ML methods such as local regression, CART, and random forests, are generally applicable to a wide range of decision problems. We prove that they are tractable and asymptotically optimal even when data is not iid and may be censored. We extend this to the case where decision variables may directly affect uncertainty in unknown ways, such as pricing's effect on demand. As an analogue to R^2, we develop a metric P termed the coefficient of prescriptiveness to measure the prescriptive content of data and the efficacy of a policy from an operations perspective. To demonstrate the power of our approach in a real-world setting we study an inventory management problem faced by the distribution arm of an international media conglomerate, which ships an average of 1bil units per year. We leverage internal data and public online data harvested from IMDb, Rotten Tomatoes, and Google to prescribe operational decisions that outperform baseline measures. Specifically, the data we collect, leveraged by our methods, accounts for an 88\% improvement as measured by our P.",True,True,"Dimitris Bertsimas and Nathan Kallus",2020.0,,,10.1287/MNSC.2018.3253,Manag. Sci.,From Predictive to Prescriptive Analytics,From Predictive to Prescriptive Analytics,http://arxiv.org/pdf/1402.5481v4,"In this paper, we combine ideas from machine learning (ML) and operations research and management science (OR/MS) in developing a framework, along with specific methods, for using data to prescribe optimal decisions in OR/MS problems. In a departure from other work on data-driven optimization and reflecting our practical experience with the data available in applications of OR/MS, we consider data consisting, not only of observations of quantities with direct effect on costs/revenues, such as demand or returns, but predominantly of observations of associated auxiliary quantities. The main problem of interest is a conditional stochastic optimization problem, given imperfect observations, where the joint probability distributions that specify the problem are unknown. We demonstrate that our proposed solution methods, which are inspired by ML methods such as local regression, CART, and random forests, are generally applicable to a wide range of decision problems. We prove that they are tractable and asymptotically optimal even when data is not iid and may be censored. We extend this to the case where decision variables may directly affect uncertainty in unknown ways, such as pricing's effect on demand. 
As an analogue to R^2, we develop a metric P termed the coefficient of prescriptiveness to measure the prescriptive content of data and the efficacy of a policy from an operations perspective. To demonstrate the power of our approach in a real-world setting we study an inventory management problem faced by the distribution arm of an international media conglomerate, which ships an average of 1bil units per year. We leverage internal data and public online data harvested from IMDb, Rotten Tomatoes, and Google to prescribe operational decisions that outperform baseline measures. Specifically, the data we collect, leveraged by our methods, accounts for an 88\% improvement as measured by our P." "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,PtO-bound,\cite{PtO-bound},Generalization Bounds in the Predict-then-Optimize Framework,http://arxiv.org/abs/1905.11488v3,"The predict-then-optimize framework is fundamental in many practical settings: predict the unknown parameters of an optimization problem, and then solve the problem using the predicted values of the parameters. A natural loss function in this environment is to consider the cost of the decisions induced by the predicted parameters, in contrast to the prediction error of the parameters. This loss function was recently introduced in Elmachtoub and Grigas (2022) and referred to as the Smart Predict-then-Optimize (SPO) loss. In this work, we seek to provide bounds on how well the performance of a prediction model fit on training data generalizes out-of-sample, in the context of the SPO loss. Since the SPO loss is non-convex and non-Lipschitz, standard results for deriving generalization bounds do not apply. We first derive bounds based on the Natarajan dimension that, in the case of a polyhedral feasible region, scale at most logarithmically in the number of extreme points, but, in the case of a general convex feasible region, have linear dependence on the decision dimension. By exploiting the structure of the SPO loss function and a key property of the feasible region, which we denote as the strength property, we can dramatically improve the dependence on the decision and feature dimensions. Our approach and analysis rely on placing a margin around problematic predictions that do not yield unique optimal solutions, and then providing generalization bounds in the context of a modified margin SPO loss function that is Lipschitz continuous. Finally, we characterize the strength property and show that the modified SPO loss can be computed efficiently for both strongly convex bodies and polytopes with an explicit extreme point representation.",True,True,"Othman El Balghiti and Adam N. 
Elmachtoub and Paul Grigas and Ambuj Tewari",2019.0,,https://proceedings.neurips.cc/paper/2019/hash/a70145bf8b173e4496b554ce57969e24-Abstract.html,,,Generalization Bounds in the Predict-then-Optimize Framework,[PDF] Generalization Bounds in the Predict-then-Optimize Framework,https://www.ambujtewari.com/research/elbalghiti22generalization.pdf,"The predict-then-optimize framework is fundamental in many practical settings: predict the unknown parameters of an optimization problem, and then solve" "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,PTOCA,\cite{PTOCA},A Predict-Then-Optimize Couriers Allocation Framework for Emergency Last-mile Logistics,,,True,False,"Kaiwen Xia and Li Lin and Shuai Wang and Haotian Wang and Desheng Zhang and Tian He",2023.0,,,10.1145/3580305.3599766,,A Predict-Then-Optimize Couriers Allocation Framework for Emergency Last-mile Logistics,A Predict-Then-Optimize Couriers Allocation ... - BibSonomy,https://www.bibsonomy.org/bibtex/1340e4e6e70a4e7eaefc9a9d0241a9e82,"A Predict-Then-Optimize Couriers Allocation Framework for Emergency Last-mile Logistics. K. Xia, L. Lin, S. Wang, H. Wang, D. Zhang, and T. He." "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,PTOFA,\cite{PTOFA},"A Predict-Then-Optimize Customer Allocation Framework for Online Fund Recommendation",http://arxiv.org/abs/2503.03165v1,"With the rapid growth of online investment platforms, funds can be distributed to individual customers online. The central issue is to match funds with potential customers under constraints. Most mainstream platforms adopt the recommendation formulation to tackle the problem. However, the traditional recommendation regime has its inherent drawbacks when applying the fund-matching problem with multiple constraints. In this paper, we model the fund matching under the allocation formulation. We design PTOFA, a Predict-Then-Optimize Fund Allocation framework. This data-driven framework consists of two stages, i.e., prediction and optimization, which aim to predict expected revenue based on customer behavior and optimize the impression allocation to achieve the maximum revenue under the necessary constraints, respectively. Extensive experiments on real-world datasets from an industrial online investment platform validate the effectiveness and efficiency of our solution.
Additionally, the online A/B tests demonstrate PTOFA's effectiveness in the real-world fund recommendation scenario.",True,True,"Tang, Xing and Weng, Yunpeng and Lyu, Fuyuan and Liu, Dugang and He, Xiuqiang",2025.0,,,,arXiv preprint arXiv:2503.03165,"A Predict-Then-Optimize Customer Allocation Framework for Online Fund Recommendation",[Literature Review] A Predict-Then-Optimize Customer ...,https://www.themoonlight.io/en/review/a-predict-then-optimize-customer-allocation-framework-for-online-fund-recommendation,The paper presents a novel Predict-Then-Optimize Fund Allocation (PTOFA) framework designed to address the challenges of matching funds with "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,PTO-PNO-Benchmark,\cite{PTO-PNO-Benchmark},"Benchmarking PtO and PnO Methods in the Predictive Combinatorial Optimization Regime",http://arxiv.org/abs/2311.07633v5,"Predictive combinatorial optimization, where the parameters of combinatorial optimization (CO) are unknown at the decision-making time, is the precise modeling of many real-world applications, including energy cost-aware scheduling and budget allocation on advertising. Tackling such a problem usually involves a prediction model and a CO solver. These two modules are integrated into the predictive CO pipeline following two design principles: ""Predict-then-Optimize (PtO)"", which learns predictions by supervised training and subsequently solves CO using predicted coefficients, while the other, named ""Predict-and-Optimize (PnO)"", directly optimizes towards the ultimate decision quality and claims to yield better decisions than traditional PtO approaches. However, there lacks a systematic benchmark of both approaches, including the specific design choices at the module level, as well as an evaluation dataset that covers representative real-world scenarios. To this end, we develop a modular framework to benchmark 11 existing PtO/PnO methods on 8 problems, including a new industrial dataset for combinatorial advertising that will be released. Our study shows that PnO approaches are better than PtO on 7 out of 8 benchmarks, but there is no silver bullet found for the specific design choices of PnO. A comprehensive categorization of current approaches and integration of typical scenarios are provided under a unified benchmark. Therefore, this paper could serve as a comprehensive benchmark for future PnO approach development and also offer fast prototyping for application-focused development. The code is available at https://github.com/Thinklab-SJTU/PredictiveCO-Benchmark.",True,True,"Geng, Haoyu and Ruan, Hang and Wang, Runzhong and Li, Yang and Wang, Yang and Chen, Lei and Yan, Junchi",2024.0,,https://openreview.net/forum?id=cX57Pbw8vS,,,"Benchmarking PtO and PnO Methods in the Predictive Combinatorial Optimization Regime",Benchmarking PtO and PnO Methods in the Predictive ...,https://arxiv.org/html/2311.07633v5,Predictive combinatorial optimization is a family of Combinatorial Optimization (CO) problems where the problem parameters are unknown during decision-making. "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,PtOorPnO,\cite{PtOorPnO},"Predict-then-optimize or predict-and-optimize? An empirical evaluation of cost-sensitive learning strategies",,,True,False,"Toon Vanderschueren and Tim Verdonck and Bart Baesens and Wouter Verbeke",2022.0,,,10.1016/J.INS.2022.02.021,Inf. Sci.,"Predict-then-optimize or predict-and-optimize? 
An empirical evaluation of cost-sensitive learning strategies",Predict-then-optimize or predict-and-optimize? An ...,https://repository.uantwerpen.be/link/irua/189575,"by T Vanderschueren · 2022 · Cited by 81 — Predict-then-optimize or predict-and-optimize? An empirical evaluation of cost-sensitive learning strategies. Author. Vanderschueren, Toon. Verdonck, Tim." "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,PnO-bound,\cite{PnO-bound},Risk Bounds and Calibration for a Smart Predict-then-Optimize Method,http://arxiv.org/abs/2108.08887v2,"The predict-then-optimize framework is fundamental in practical stochastic decision-making problems: first predict unknown parameters of an optimization model, then solve the problem using the predicted values. A natural loss function in this setting is defined by measuring the decision error induced by the predicted parameters, which was named the Smart Predict-then-Optimize (SPO) loss by Elmachtoub and Grigas [arXiv:1710.08005]. Since the SPO loss is typically nonconvex and possibly discontinuous, Elmachtoub and Grigas [arXiv:1710.08005] introduced a convex surrogate, called the SPO+ loss, that importantly accounts for the underlying structure of the optimization model. In this paper, we greatly expand upon the consistency results for the SPO+ loss provided by Elmachtoub and Grigas [arXiv:1710.08005]. We develop risk bounds and uniform calibration results for the SPO+ loss relative to the SPO loss, which provide a quantitative way to transfer the excess surrogate risk to excess true risk. By combining our risk bounds with generalization bounds, we show that the empirical minimizer of the SPO+ loss achieves low excess true risk with high probability. We first demonstrate these results in the case when the feasible region of the underlying optimization problem is a polyhedron, and then we show that the results can be strengthened substantially when the feasible region is a level set of a strongly convex function. We perform experiments to empirically demonstrate the strength of the SPO+ surrogate, as compared to standard $\ell_1$ and squared $\ell_2$ prediction error losses, on portfolio allocation and cost-sensitive multi-class classification problems.",True,True,"Heyuan Liu and Paul Grigas",2021.0,,https://proceedings.neurips.cc/paper/2021/hash/b943325cc7b7422d2871b345bf9b067f-Abstract.html,,,Risk Bounds and Calibration for a Smart Predict-then-Optimize Method,[PDF] Risk Bounds and Calibration for a Smart Predict-then-Optimize Method,https://papers.neurips.cc/paper/2021/file/b943325cc7b7422d2871b345bf9b067f-Paper.pdf,The predict-then-optimize framework is fundamental in practical stochastic decision-making problems: first predict unknown parameters of an optimization. "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,DFL-Survey,\cite{DFL-Survey},"Decision-Focused Learning: Foundations, State of the Art, Benchmark and Future Opportunities",http://arxiv.org/abs/2307.13565v4,"Decision-focused learning (DFL) is an emerging paradigm that integrates machine learning (ML) and constrained optimization to enhance decision quality by training ML models in an end-to-end system. This approach shows significant potential to revolutionize combinatorial decision-making in real-world applications that operate under uncertainty, where estimating unknown parameters within decision models is a major challenge. 
This paper presents a comprehensive review of DFL, providing an in-depth analysis of both gradient-based and gradient-free techniques used to combine ML and constrained optimization. It evaluates the strengths and limitations of these techniques and includes an extensive empirical evaluation of eleven methods across seven problems. The survey also offers insights into recent advancements and future research directions in DFL. Code and benchmark: https://github.com/PredOpt/predopt-benchmarks",True,True,"Jayanta Mandi and James Kotary and Senne Berden and Maxime Mulamba and Victor Bucarey and Tias Guns and Ferdinando Fioretto",2024.0,,,10.1613/JAIR.1.15320,J. Artif. Intell. Res.,"Decision-Focused Learning: Foundations, State of the Art, Benchmark and Future Opportunities","View of Decision-Focused Learning: Foundations, State ...",https://jair.org/index.php/jair/article/view/15320/27076,"by J Mandi · 2024 · Cited by 107 — Decision-Focused Learning: Foundations, State of the Art, Benchmark and Future Opportunities" "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,OptNet,\cite{OptNet},OptNet: Differentiable Optimization as a Layer in Neural Networks,http://arxiv.org/abs/1703.00443v5,"This paper presents OptNet, a network architecture that integrates optimization problems (here, specifically in the form of quadratic programs) as individual layers in larger end-to-end trainable deep networks. These layers encode constraints and complex dependencies between the hidden states that traditional convolutional and fully-connected layers often cannot capture. We explore the foundations for such an architecture: we show how techniques from sensitivity analysis, bilevel optimization, and implicit differentiation can be used to exactly differentiate through these layers and with respect to layer parameters; we develop a highly efficient solver for these layers that exploits fast GPU-based batch solves within a primal-dual interior point method, and which provides backpropagation gradients with virtually no additional cost on top of the solve; and we highlight the application of these approaches in several problems. In one notable example, the method learns to play mini-Sudoku (4x4) given just input and output games, with no a-priori information about the rules of the game; this highlights the ability of OptNet to learn hard constraints better than other neural architectures.",True,True,"Brandon Amos and J. Zico Kolter",2017.0,,http://proceedings.mlr.press/v70/amos17a.html,,,OptNet: Differentiable Optimization as a Layer in Neural Networks,OptNet: Differentiable Optimization as a Layer in Neural Networks,http://arxiv.org/pdf/1703.00443v5,"This paper presents OptNet, a network architecture that integrates optimization problems (here, specifically in the form of quadratic programs) as individual layers in larger end-to-end trainable deep networks. These layers encode constraints and complex dependencies between the hidden states that traditional convolutional and fully-connected layers often cannot capture.
We explore the foundations for such an architecture: we show how techniques from sensitivity analysis, bilevel optimization, and implicit differentiation can be used to exactly differentiate through these layers and with respect to layer parameters; we develop a highly efficient solver for these layers that exploits fast GPU-based batch solves within a primal-dual interior point method, and which provides backpropagation gradients with virtually no additional cost on top of the solve; and we highlight the application of these approaches in several problems. In one notable example, the method learns to play mini-Sudoku (4x4) given just input and output games, with no a-priori information about the rules of the game; this highlights the ability of OptNet to learn hard constraints better than other neural architectures." "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,Cvxpylayers,\cite{Cvxpylayers},Differentiable Convex Optimization Layers,http://arxiv.org/abs/1910.12430v1,"Recent work has shown how to embed differentiable optimization problems (that is, problems whose solutions can be backpropagated through) as layers within deep learning architectures. This method provides a useful inductive bias for certain problems, but existing software for differentiable optimization layers is rigid and difficult to apply to new settings.
In this paper, we propose an approach to differentiating through disciplined convex programs, a subclass of convex optimization problems used by domain-specific languages (DSLs) for convex optimization. We introduce disciplined parametrized programming, a subset of disciplined convex programming, and we show that every disciplined parametrized program can be represented as the composition of an affine map from parameters to problem data, a solver, and an affine map from the solver's solution to a solution of the original problem (a new form we refer to as affine-solver-affine form). We then demonstrate how to efficiently differentiate through each of these components, allowing for end-to-end analytical differentiation through the entire convex program. We implement our methodology in version 1.1 of CVXPY, a popular Python-embedded DSL for convex optimization, and additionally implement differentiable layers for disciplined convex programs in PyTorch and TensorFlow 2.0. Our implementation significantly lowers the barrier to using convex optimization problems in differentiable programs. We present applications in linear machine learning models and in stochastic control, and we show that our layer is competitive (in execution time) compared to specialized differentiable solvers from past work." "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,NCE,\cite{NCE},Contrastive Losses and Solution Caching for Predict-and-Optimize,http://arxiv.org/abs/2011.05354v2,"Many decision-making processes involve solving a combinatorial optimization problem with uncertain input that can be estimated from historic data. Recently, problems in this class have been successfully addressed via end-to-end learning approaches, which rely on solving one optimization problem for each training instance at every epoch. In this context, we provide two distinct contributions. First, we use a Noise Contrastive approach to motivate a family of surrogate loss functions, based on viewing non-optimal solutions as negative examples. Second, we address a major bottleneck of all predict-and-optimize approaches, i.e. the need to frequently recompute optimal solutions at training time. This is done via a solver-agnostic solution caching scheme, and by replacing optimization calls with a lookup in the solution cache. The method is formally based on an inner approximation of the feasible space and, combined with a cache lookup strategy, provides a controllable trade-off between training time and accuracy of the loss approximation. We empirically show that even a very slow growth rate is enough to match the quality of state-of-the-art methods, at a fraction of the computational cost.",True,True,"Maxime Mulamba and Jayanta Mandi and Michelangelo Diligenti and Michele Lombardi and Victor Bucarey and Tias Guns",2021.0,,,10.24963/IJCAI.2021/390,,Contrastive Losses and Solution Caching for Predict-and-Optimize,[PDF] Contrastive Losses and Solution Caching for Predict-and-Optimize,https://people.cs.kuleuven.be/~tias.guns/files/ijcai21_nce_solpool.pdf,"In contrast, our con- trastive losses, coupled with a solution caching mechanism, do away with repeatedly solving the optimization problem." "Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting",2505.24835v1,SPO+,\cite{SPO+},"Smart ""Predict, then Optimize""",http://arxiv.org/abs/1710.08005v5,"Many real-world analytics problems involve two significant challenges: prediction and optimization. 
Due to the typically complex nature of each challenge, the standard paradigm is predict-then-optimize. By and large, machine learning tools are intended to minimize prediction error and do not account for how the predictions will be used in the downstream optimization problem. In contrast, we propose a new and very general framework, called Smart ""Predict, then Optimize"" (SPO), which directly leverages the optimization problem structure, i.e., its objective and constraints, for designing better prediction models. A key component of our framework is the SPO loss function which measures the decision error induced by a prediction. Training a prediction model with respect to the SPO loss is computationally challenging, and thus we derive, using duality theory, a convex surrogate loss function which we call the SPO+ loss. Most importantly, we prove that the SPO+ loss is statistically consistent with respect to the SPO loss under mild conditions. Our SPO+ loss function can tractably handle any polyhedral, convex, or even mixed-integer optimization problem with a linear objective. Numerical experiments on shortest path and portfolio optimization problems show that the SPO framework can lead to significant improvement under the predict-then-optimize paradigm, in particular when the prediction model being trained is misspecified. We find that linear models trained using SPO+ loss tend to dominate random forest algorithms, even when the ground truth is highly nonlinear.",True,True,"Adam N. Elmachtoub and Paul Grigas",2022.0,,,10.1287/MNSC.2020.3922,Manag. Sci.,"Smart ""Predict, then Optimize""","Smart ""Predict, then Optimize""",http://arxiv.org/pdf/1710.08005v5,"Many real-world analytics problems involve two significant challenges: prediction and optimization. Due to the typically complex nature of each challenge, the standard paradigm is predict-then-optimize. By and large, machine learning tools are intended to minimize prediction error and do not account for how the predictions will be used in the downstream optimization problem. In contrast, we propose a new and very general framework, called Smart ""Predict, then Optimize"" (SPO), which directly leverages the optimization problem structure, i.e., its objective and constraints, for designing better prediction models. A key component of our framework is the SPO loss function which measures the decision error induced by a prediction. Training a prediction model with respect to the SPO loss is computationally challenging, and thus we derive, using duality theory, a convex surrogate loss function which we call the SPO+ loss. Most importantly, we prove that the SPO+ loss is statistically consistent with respect to the SPO loss under mild conditions. Our SPO+ loss function can tractably handle any polyhedral, convex, or even mixed-integer optimization problem with a linear objective. Numerical experiments on shortest path and portfolio optimization problems show that the SPO framework can lead to significant improvement under the predict-then-optimize paradigm, in particular when the prediction model being trained is misspecified. We find that linear models trained using SPO+ loss tend to dominate random forest algorithms, even when the ground truth is highly nonlinear." 
Aligning Protein Conformation Ensemble Generation with Physical Feedback,2505.24203v1,jumper2021highly,\cite{jumper2021highly},Highly accurate protein structure prediction with AlphaFold,,,True,False,"Jumper, John and Evans, Richard and Pritzel, Alexander and Green, Tim and Figurnov, Michael and Ronneberger, Olaf and Tunyasuvunakool, Kathryn and Bates, Russ and {\v{Z}}{\'\i}dek, Augustin and Potapenko, Anna and others",2021.0,,,,Nature,Highly accurate protein structure prediction with AlphaFold,Highly accurate protein structure prediction with AlphaFold - Nature,https://www.nature.com/articles/s41586-021-03819-2,"Highly accurate protein structure prediction with AlphaFold | Nature We validated an entirely redesigned version of our neural network-based model, AlphaFold, in the challenging 14th Critical Assessment of protein Structure Prediction (CASP14), demonstrating accuracy competitive with experimental structures in a majority of cases and greatly outperforming other methods. AlphaFold structures had a median backbone accuracy of 0.96 Å r.m.s.d.95 (Cα root-mean-square deviation at 95% residue coverage) (95% confidence interval=0.85–1.16 Å) whereas the next best performing method had a median backbone accuracy of 2.8 Å r.m.s.d.95 (95% confidence interval=2.7–4.0 Å) (measured on CASP domains; see Fig. 1a for backbone accuracy and Supplementary Fig. 14 for all-atom accuracy)." Aligning Protein Conformation Ensemble Generation with Physical Feedback,2505.24203v1,noe2019boltzmann,\cite{noe2019boltzmann},"Boltzmann Generators -- Sampling Equilibrium States of Many-Body Systems with Deep Learning",http://arxiv.org/abs/1812.01729v2,"Computing equilibrium states in condensed-matter many-body systems, such as solvated proteins, is a long-standing challenge. Lacking methods for generating statistically independent equilibrium samples in ""one shot"", vast computational effort is invested for simulating these systems in small steps, e.g., using Molecular Dynamics. Combining deep learning and statistical mechanics, we here develop Boltzmann Generators, that are shown to generate unbiased one-shot equilibrium samples of representative condensed matter systems and proteins. Boltzmann Generators use neural networks to learn a coordinate transformation of the complex configurational equilibrium distribution to a distribution that can be easily sampled. Accurate computation of free energy differences and discovery of new configurations are demonstrated, providing a statistical mechanics tool that can avoid rare events during sampling without prior knowledge of reaction coordinates.",True,True,"No{\'e}, Frank and Olsson, Simon and K{\""o}hler, Jonas and Wu, Hao",2019.0,,,,Science,"Boltzmann Generators -- Sampling Equilibrium States of Many-Body Systems with Deep Learning",(PDF) Boltzmann generators: Sampling equilibrium states ...,https://www.researchgate.net/publication/335645955_Boltzmann_generators_Sampling_equilibrium_states_of_many-body_systems_with_deep_learning,Boltzmann generators: Sampling equilibrium states of many-body systems with deep learning. September 2019; Science 365(6457):eaaw1147.
Aligning Protein Conformation Ensemble Generation with Physical Feedback,2505.24203v1,arts2023two,\cite{arts2023two},Two for One: Diffusion Models and Force Fields for Coarse-Grained Molecular Dynamics,,,True,False,"Arts, Marloes and Satorras, Victor Garcia and Huang, Chin-Wei and Zuegner, Daniel and Federici, Marco and Clementi, Cecilia and No{\'e}, Frank and Pinsler, Robert and Berg, Rianne van den",2023.0,,,,arXiv preprint arXiv:2302.00600,Two for One: Diffusion Models and Force Fields for Coarse-Grained Molecular Dynamics,Two for One: Diffusion Models and Force Fields for Coarse-Grained ...,https://arxiv.org/abs/2302.00600,"In this work, we leverage connections between score-based generative models, force fields and molecular dynamics to learn a CG force field" Aligning Protein Conformation Ensemble Generation with Physical Feedback,2505.24203v1,jing2023eigenfold,\cite{jing2023eigenfold},EigenFold: Generative Protein Structure Prediction with Diffusion Models,http://arxiv.org/abs/2304.02198v1,"Protein structure prediction has reached revolutionary levels of accuracy on single structures, yet distributional modeling paradigms are needed to capture the conformational ensembles and flexibility that underlie biological function. Towards this goal, we develop EigenFold, a diffusion generative modeling framework for sampling a distribution of structures from a given protein sequence. We define a diffusion process that models the structure as a system of harmonic oscillators and which naturally induces a cascading-resolution generative process along the eigenmodes of the system. On recent CAMEO targets, EigenFold achieves a median TMScore of 0.84, while providing a more comprehensive picture of model uncertainty via the ensemble of sampled structures relative to existing methods. We then assess EigenFold's ability to model and predict conformational heterogeneity for fold-switching proteins and ligand-induced conformational change. Code is available at https://github.com/bjing2016/EigenFold.",True,True,"Jing, Bowen and Erives, Ezra and Pao-Huang, Peter and Corso, Gabriele and Berger, Bonnie and Jaakkola, Tommi",2023.0,,,,arXiv preprint arXiv:2304.02198,EigenFold: Generative Protein Structure Prediction with Diffusion Models,EigenFold: Generative Protein Structure Prediction with Diffusion Models,http://arxiv.org/pdf/2304.02198v1,"Protein structure prediction has reached revolutionary levels of accuracy on single structures, yet distributional modeling paradigms are needed to capture the conformational ensembles and flexibility that underlie biological function. Towards this goal, we develop EigenFold, a diffusion generative modeling framework for sampling a distribution of structures from a given protein sequence. We define a diffusion process that models the structure as a system of harmonic oscillators and which naturally induces a cascading-resolution generative process along the eigenmodes of the system. On recent CAMEO targets, EigenFold achieves a median TMScore of 0.84, while providing a more comprehensive picture of model uncertainty via the ensemble of sampled structures relative to existing methods. We then assess EigenFold's ability to model and predict conformational heterogeneity for fold-switching proteins and ligand-induced conformational change. Code is available at https://github.com/bjing2016/EigenFold."
Aligning Protein Conformation Ensemble Generation with Physical Feedback,2505.24203v1,lu2024str2str,\cite{lu2024str2str},"Str2Str: A Score-based Framework for Zero-shot Protein Conformation Sampling",http://arxiv.org/abs/2306.03117v3,"The dynamic nature of proteins is crucial for determining their biological functions and properties, for which Monte Carlo (MC) and molecular dynamics (MD) simulations stand as predominant tools to study such phenomena. By utilizing empirically derived force fields, MC or MD simulations explore the conformational space through numerically evolving the system via Markov chain or Newtonian mechanics. However, the high-energy barrier of the force fields can hamper the exploration of both methods by the rare event, resulting in inadequately sampled ensemble without exhaustive running. Existing learning-based approaches perform direct sampling yet heavily rely on target-specific simulation data for training, which suffers from high data acquisition cost and poor generalizability. Inspired by simulated annealing, we propose Str2Str, a novel structure-to-structure translation framework capable of zero-shot conformation sampling with roto-translation equivariant property. Our method leverages an amortized denoising score matching objective trained on general crystal structures and has no reliance on simulation data during both training and inference. Experimental results across several benchmarking protein systems demonstrate that Str2Str outperforms previous state-of-the-art generative structure prediction models and can be orders of magnitude faster compared to long MD simulations. Our open-source implementation is available at https://github.com/lujiarui/Str2Str",True,True,"Lu, Jiarui and Zhong, Bozitao and Zhang, Zuobai and Tang, Jian",2024.0,,,,,"Str2Str: A Score-based Framework for Zero-shot Protein Conformation Sampling","Codebase of the paper ""Str2Str: A Score-based Framework for Zero ...",https://github.com/lujiarui/Str2Str,Str2Str is a score-based framework (which means it can accommodate any diffusion/flow matching architecture) for protein conformation sampling in a zero-shot Aligning Protein Conformation Ensemble Generation with Physical Feedback,2505.24203v1,zheng2024predicting,\cite{zheng2024predicting},"Towards Predicting Equilibrium Distributions for Molecular Systems with Deep Learning",http://arxiv.org/abs/2306.05445v1,"Advances in deep learning have greatly improved structure prediction of molecules. However, many macroscopic observations that are important for real-world applications are not functions of a single molecular structure, but rather determined from the equilibrium distribution of structures. Traditional methods for obtaining these distributions, such as molecular dynamics simulation, are computationally expensive and often intractable. In this paper, we introduce a novel deep learning framework, called Distributional Graphormer (DiG), in an attempt to predict the equilibrium distribution of molecular systems. Inspired by the annealing process in thermodynamics, DiG employs deep neural networks to transform a simple distribution towards the equilibrium distribution, conditioned on a descriptor of a molecular system, such as a chemical graph or a protein sequence. This framework enables efficient generation of diverse conformations and provides estimations of state densities. 
We demonstrate the performance of DiG on several molecular tasks, including protein conformation sampling, ligand structure sampling, catalyst-adsorbate sampling, and property-guided structure generation. DiG presents a significant advancement in methodology for statistically understanding molecular systems, opening up new research opportunities in molecular science.",True,True,"Zheng, Shuxin and He, Jiyan and Liu, Chang and Shi, Yu and Lu, Ziheng and Feng, Weitao and Ju, Fusong and Wang, Jiaxi and Zhu, Jianwei and Min, Yaosen and others",2024.0,,,,Nature Machine Intelligence,"Towards Predicting Equilibrium Distributions for Molecular Systems with Deep Learning",Towards Predicting Equilibrium Distributions for Molecular Systems ...,https://arxiv.org/abs/2306.05445,"In this paper, we introduce a novel deep learning framework, called Distributional Graphormer (DiG), in an attempt to predict the equilibrium distribution of" Aligning Protein Conformation Ensemble Generation with Physical Feedback,2505.24203v1,wang2024proteinconformationgenerationforceguided,\cite{wang2024proteinconformationgenerationforceguided},Protein Conformation Generation via Force-Guided SE(3) Diffusion Models,http://arxiv.org/abs/2403.14088v2,"The conformational landscape of proteins is crucial to understanding their functionality in complex biological processes. Traditional physics-based computational methods, such as molecular dynamics (MD) simulations, suffer from rare event sampling and long equilibration time problems, hindering their applications in general protein systems. Recently, deep generative modeling techniques, especially diffusion models, have been employed to generate novel protein conformations. However, existing score-based diffusion methods cannot properly incorporate important physical prior knowledge to guide the generation process, causing large deviations in the sampled protein conformations from the equilibrium distribution. In this paper, to overcome these limitations, we propose a force-guided SE(3) diffusion model, ConfDiff, for protein conformation generation. By incorporating a force-guided network with a mixture of data-based score models, ConfDiff can generate protein conformations with rich diversity while preserving high fidelity. Experiments on a variety of protein conformation prediction tasks, including 12 fast-folding proteins and the Bovine Pancreatic Trypsin Inhibitor (BPTI), demonstrate that our method surpasses the state-of-the-art method.",True,True,Yan Wang and Lihao Wang and Yuning Shen and Yiqun Wang and Huizhuo Yuan and Yue Wu and Quanquan Gu,2024.0,,https://arxiv.org/abs/2403.14088,,,Protein Conformation Generation via Force-Guided SE(3) Diffusion Models,Official Implemetation of ConfDiff (ICML'24) - Protein Conformation ...,https://github.com/bytedance/ConfDiff,A force-guided SE(3) diffusion model for protein conformation generation. ConfDiff can generate protein conformations with rich diversity while preserving high Aligning Protein Conformation Ensemble Generation with Physical Feedback,2505.24203v1,jing2024alphafoldmeetsflowmatching,\cite{jing2024alphafoldmeetsflowmatching},AlphaFold Meets Flow Matching for Generating Protein Ensembles,http://arxiv.org/abs/2402.04845v2,"The biological functions of proteins often depend on dynamic structural ensembles. In this work, we develop a flow-based generative modeling approach for learning and sampling the conformational landscapes of proteins. 
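The AlphaFlow/ESMFlow record beginning above fine-tunes single-state structure predictors under a flow matching objective; the sketch below shows a linear-interpolant conditional flow matching loss of the kind such fine-tuning uses, with `velocity_net` standing in for the repurposed predictor (the signature is an assumption, not the released training code):

```python
import torch

def flow_matching_loss(velocity_net, x0, x1, cond):
    """Conditional flow matching with a linear interpolant: move prior
    samples x0 toward data structures x1 along x_t = (1 - t) x0 + t x1;
    the regression target is the constant velocity x1 - x0."""
    b = x0.shape[0]
    t = torch.rand(b, device=x0.device).view(b, *([1] * (x0.dim() - 1)))
    x_t = (1 - t) * x0 + t * x1
    target = x1 - x0
    pred = velocity_net(x_t, t.view(b), cond)  # e.g. a fine-tuned structure module
    return ((pred - target) ** 2).mean()
```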
We repurpose highly accurate single-state predictors such as AlphaFold and ESMFold and fine-tune them under a custom flow matching framework to obtain sequence-conditioned generative models of protein structure called AlphaFlow and ESMFlow. When trained and evaluated on the PDB, our method provides a superior combination of precision and diversity compared to AlphaFold with MSA subsampling. When further trained on ensembles from all-atom MD, our method accurately captures conformational flexibility, positional distributions, and higher-order ensemble observables for unseen proteins. Moreover, our method can diversify a static PDB structure with faster wall-clock convergence to certain equilibrium properties than replicate MD trajectories, demonstrating its potential as a proxy for expensive physics-based simulations. Code is available at https://github.com/bjing2016/alphaflow.",True,True,Bowen Jing and Bonnie Berger and Tommi Jaakkola,2024.0,,https://arxiv.org/abs/2402.04845,,,AlphaFold Meets Flow Matching for Generating Protein Ensembles,AlphaFold Meets Flow Matching for Generating Protein Ensembles,http://arxiv.org/pdf/2402.04845v2,"The biological functions of proteins often depend on dynamic structural ensembles. In this work, we develop a flow-based generative modeling approach for learning and sampling the conformational landscapes of proteins. We repurpose highly accurate single-state predictors such as AlphaFold and ESMFold and fine-tune them under a custom flow matching framework to obtain sequence-conditioned generative models of protein structure called AlphaFlow and ESMFlow. When trained and evaluated on the PDB, our method provides a superior combination of precision and diversity compared to AlphaFold with MSA subsampling. When further trained on ensembles from all-atom MD, our method accurately captures conformational flexibility, positional distributions, and higher-order ensemble observables for unseen proteins. Moreover, our method can diversify a static PDB structure with faster wall-clock convergence to certain equilibrium properties than replicate MD trajectories, demonstrating its potential as a proxy for expensive physics-based simulations. Code is available at https://github.com/bjing2016/alphaflow." Aligning Protein Conformation Ensemble Generation with Physical Feedback,2505.24203v1,lu2024structure,\cite{lu2024structure},Structure Language Models for Protein Conformation Generation,http://arxiv.org/abs/2410.18403v2,"Proteins adopt multiple structural conformations to perform their diverse biological functions, and understanding these conformations is crucial for advancing drug discovery. Traditional physics-based simulation methods often struggle with sampling equilibrium conformations and are computationally expensive. Recently, deep generative models have shown promise in generating protein conformations as a more efficient alternative. However, these methods predominantly rely on the diffusion process within a 3D geometric space, which typically centers around the vicinity of metastable states and is often inefficient in terms of runtime. In this paper, we introduce Structure Language Modeling (SLM) as a novel framework for efficient protein conformation generation. Specifically, the protein structures are first encoded into a compact latent space using a discrete variational auto-encoder, followed by conditional language modeling that effectively captures sequence-specific conformation distributions.
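The SLM record beginning above samples conformations as discrete token sequences: a dVAE compresses structures into codes, and a language model generates those codes conditioned on the amino-acid sequence. A minimal autoregressive sketch under that reading (the `lm` and `dvae` interfaces are assumptions, not ESMDiff's code):

```python
import torch

@torch.no_grad()
def sample_conformation(lm, dvae, sequence_tokens, n_struct_tokens, temperature=1.0):
    """Sketch of structure language modeling: autoregressively sample discrete
    structure tokens conditioned on the protein sequence, then decode them
    back to 3D coordinates with the dVAE decoder."""
    tokens = sequence_tokens.clone()                 # condition: protein sequence
    for _ in range(n_struct_tokens):
        logits = lm(tokens)[:, -1, :] / temperature  # next-token distribution
        nxt = torch.multinomial(logits.softmax(-1), 1)
        tokens = torch.cat([tokens, nxt], dim=1)
    struct_tokens = tokens[:, sequence_tokens.shape[1]:]
    return dvae.decode(struct_tokens)                # latent codes -> coordinates
```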
This enables a more efficient and interpretable exploration of diverse ensemble modes compared to existing methods. Based on this general framework, we instantiate SLM with various popular LM architectures as well as proposing the ESMDiff, a novel BERT-like structure language model fine-tuned from ESM3 with masked diffusion. We verify our approach in various scenarios, including the equilibrium dynamics of BPTI, conformational change pairs, and intrinsically disordered proteins. SLM provides a highly efficient solution, offering a 20-100x speedup than existing methods in generating diverse conformations, shedding light on promising avenues for future research.",True,True,"Lu, Jiarui and Chen, Xiaoyin and Lu, Stephen Zhewen and Shi, Chence and Guo, Hongyu and Bengio, Yoshua and Tang, Jian",2024.0,,,,arXiv preprint arXiv:2410.18403,Structure Language Models for Protein Conformation Generation,Structure Language Models for Protein Conformation Generation,http://arxiv.org/pdf/2410.18403v2,"Proteins adopt multiple structural conformations to perform their diverse biological functions, and understanding these conformations is crucial for advancing drug discovery. Traditional physics-based simulation methods often struggle with sampling equilibrium conformations and are computationally expensive. Recently, deep generative models have shown promise in generating protein conformations as a more efficient alternative. However, these methods predominantly rely on the diffusion process within a 3D geometric space, which typically centers around the vicinity of metastable states and is often inefficient in terms of runtime. In this paper, we introduce Structure Language Modeling (SLM) as a novel framework for efficient protein conformation generation. Specifically, the protein structures are first encoded into a compact latent space using a discrete variational auto-encoder, followed by conditional language modeling that effectively captures sequence-specific conformation distributions. This enables a more efficient and interpretable exploration of diverse ensemble modes compared to existing methods. Based on this general framework, we instantiate SLM with various popular LM architectures as well as proposing the ESMDiff, a novel BERT-like structure language model fine-tuned from ESM3 with masked diffusion. We verify our approach in various scenarios, including the equilibrium dynamics of BPTI, conformational change pairs, and intrinsically disordered proteins. SLM provides a highly efficient solution, offering a 20-100x speedup than existing methods in generating diverse conformations, shedding light on promising avenues for future research." Aligning Protein Conformation Ensemble Generation with Physical Feedback,2505.24203v1,jing2024generative,\cite{jing2024generative},Generative Modeling of Molecular Dynamics Trajectories,http://arxiv.org/abs/2409.17808v1,"Molecular dynamics (MD) is a powerful technique for studying microscopic phenomena, but its computational cost has driven significant interest in the development of deep learning-based surrogate models. We introduce generative modeling of molecular trajectories as a paradigm for learning flexible multi-task surrogate models of MD from data. By conditioning on appropriately chosen frames of the trajectory, we show such generative models can be adapted to diverse tasks such as forward simulation, transition path sampling, and trajectory upsampling. 
By alternatively conditioning on part of the molecular system and inpainting the rest, we also demonstrate the first steps towards dynamics-conditioned molecular design. We validate the full set of these capabilities on tetrapeptide simulations and show that our model can produce reasonable ensembles of protein monomers. Altogether, our work illustrates how generative modeling can unlock value from MD data towards diverse downstream tasks that are not straightforward to address with existing methods or even MD itself. Code is available at https://github.com/bjing2016/mdgen.",True,True,"Jing, Bowen and St{\""a}rk, Hannes and Jaakkola, Tommi and Berger, Bonnie",2024.0,,,,arXiv preprint arXiv:2409.17808,Generative Modeling of Molecular Dynamics Trajectories,Generative Modeling of Molecular Dynamics Trajectories,http://arxiv.org/pdf/2409.17808v1,"Molecular dynamics (MD) is a powerful technique for studying microscopic phenomena, but its computational cost has driven significant interest in the development of deep learning-based surrogate models. We introduce generative modeling of molecular trajectories as a paradigm for learning flexible multi-task surrogate models of MD from data. By conditioning on appropriately chosen frames of the trajectory, we show such generative models can be adapted to diverse tasks such as forward simulation, transition path sampling, and trajectory upsampling. By alternatively conditioning on part of the molecular system and inpainting the rest, we also demonstrate the first steps towards dynamics-conditioned molecular design. We validate the full set of these capabilities on tetrapeptide simulations and show that our model can produce reasonable ensembles of protein monomers. Altogether, our work illustrates how generative modeling can unlock value from MD data towards diverse downstream tasks that are not straightforward to address with existing methods or even MD itself. Code is available at https://github.com/bjing2016/mdgen." Aligning Protein Conformation Ensemble Generation with Physical Feedback,2505.24203v1,kreutzer2018reliability,\cite{kreutzer2018reliability},"Reliability and Learnability of Human Bandit Feedback for Sequence-to-Sequence Reinforcement Learning",http://arxiv.org/abs/1805.10627v3,"We present a study on reinforcement learning (RL) from human bandit feedback for sequence-to-sequence learning, exemplified by the task of bandit neural machine translation (NMT). We investigate the reliability of human bandit feedback, and analyze the influence of reliability on the learnability of a reward estimator, and the effect of the quality of reward estimates on the overall RL task. Our analysis of cardinal (5-point ratings) and ordinal (pairwise preferences) feedback shows that their intra- and inter-annotator $\alpha$-agreement is comparable. Best reliability is obtained for standardized cardinal feedback, and cardinal feedback is also easiest to learn and generalize from. Finally, improvements of over 1 BLEU can be obtained by integrating a regression-based reward estimator trained on cardinal feedback for 800 translations into RL for NMT. This shows that RL is possible even from small amounts of fairly reliable human feedback, pointing to a great potential for applications at larger scale.",True,True,Julia Kreutzer and Joshua Uyheng and S. 
Riezler,2018.0,,,10.18653/v1/P18-1165,Annual Meeting of the Association for Computational Linguistics,"Reliability and Learnability of Human Bandit Feedback for Sequence-to-Sequence Reinforcement Learning",Reliability and Learnability of Human Bandit Feedback for ...,https://aclanthology.org/P18-1165/, Aligning Protein Conformation Ensemble Generation with Physical Feedback,2505.24203v1,stiennon2020learning,\cite{stiennon2020learning},Learning to summarize from human feedback,http://arxiv.org/abs/2009.01325v3,"As language models become more powerful, training and evaluation are increasingly bottlenecked by the data and metrics used for a particular task. For example, summarization models are often trained to predict human reference summaries and evaluated using ROUGE, but both of these metrics are rough proxies for what we really care about -- summary quality. In this work, we show that it is possible to significantly improve summary quality by training a model to optimize for human preferences. We collect a large, high-quality dataset of human comparisons between summaries, train a model to predict the human-preferred summary, and use that model as a reward function to fine-tune a summarization policy using reinforcement learning. We apply our method to a version of the TL;DR dataset of Reddit posts and find that our models significantly outperform both human reference summaries and much larger models fine-tuned with supervised learning alone. Our models also transfer to CNN/DM news articles, producing summaries nearly as good as the human reference without any news-specific fine-tuning.
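The summarization-from-human-feedback record above first fits a reward model on pairwise human comparisons before any RL fine-tuning. A minimal Bradley-Terry sketch of that reward-model objective (`reward_model` is a placeholder scoring network, not the paper's architecture):

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_model, prompt, chosen, rejected):
    """Bradley-Terry objective for fitting a reward model from pairwise
    human comparisons: maximize log sigmoid(r(x, y_w) - r(x, y_l))."""
    r_w = reward_model(prompt, chosen)     # scalar reward for preferred output
    r_l = reward_model(prompt, rejected)   # scalar reward for dispreferred output
    return -F.logsigmoid(r_w - r_l).mean()
```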
We conduct extensive analyses to understand our human feedback dataset and fine-tuned models We establish that our reward model generalizes to new datasets, and that optimizing our reward model results in better summaries than optimizing ROUGE according to humans. We hope the evidence from our paper motivates machine learning researchers to pay closer attention to how their training loss affects the model behavior they actually want." Aligning Protein Conformation Ensemble Generation with Physical Feedback,2505.24203v1,ouyang2022training,\cite{ouyang2022training},Training language models to follow instructions with human feedback,,,True,False,Long Ouyang and Jeff Wu and Xu Jiang and Diogo Almeida and Carroll L. Wainwright and Pamela Mishkin and Chong Zhang and Sandhini Agarwal and Katarina Slama and Alex Ray and John Schulman and Jacob Hilton and Fraser Kelton and Luke E. Miller and Maddie Simens and Amanda Askell and P. Welinder and P. Christiano and J. Leike and Ryan J. Lowe,2022.0,,,,Neural Information Processing Systems,Training language models to follow instructions with human feedback,Training language models to follow instructions with human feedback,http://arxiv.org/pdf/2203.02155v1,"Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, we collect a dataset of labeler demonstrations of the desired model behavior, which we use to fine-tune GPT-3 using supervised learning. We then collect a dataset of rankings of model outputs, which we use to further fine-tune this supervised model using reinforcement learning from human feedback. We call the resulting models InstructGPT. In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters. Moreover, InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets. Even though InstructGPT still makes simple mistakes, our results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent." Aligning Protein Conformation Ensemble Generation with Physical Feedback,2505.24203v1,black2023training,\cite{black2023training},Training Diffusion Models with Reinforcement Learning,http://arxiv.org/abs/2305.13301v4,"Diffusion models are a class of flexible generative models trained with an approximation to the log-likelihood objective. However, most use cases of diffusion models are not concerned with likelihoods, but instead with downstream objectives such as human-perceived image quality or drug effectiveness. In this paper, we investigate reinforcement learning methods for directly optimizing diffusion models for such objectives. We describe how posing denoising as a multi-step decision-making problem enables a class of policy gradient algorithms, which we refer to as denoising diffusion policy optimization (DDPO), that are more effective than alternative reward-weighted likelihood approaches. 
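The DDPO record above poses denoising as a multi-step decision process in which each transition x_t -> x_{t-1} is an action. The plain REINFORCE form of the resulting policy gradient looks like the sketch below; the paper's actual estimators add importance weighting and clipping, and the tensor layout here is an assumption:

```python
import torch

def ddpo_loss(policy_logprobs, rewards):
    """Denoising as an MDP: reweight the log-probability of every denoising
    transition along a trajectory by the (whitened) terminal reward."""
    # policy_logprobs: [batch, T] log pi(x_{t-1} | x_t) along each trajectory
    # rewards:         [batch]   reward of the fully denoised sample
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    return -(policy_logprobs.sum(dim=1) * adv).mean()
```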
Empirically, DDPO is able to adapt text-to-image diffusion models to objectives that are difficult to express via prompting, such as image compressibility, and those derived from human feedback, such as aesthetic quality. Finally, we show that DDPO can improve prompt-image alignment using feedback from a vision-language model without the need for additional data collection or human annotation. The project's website can be found at http://rl-diffusion.github.io .",True,True,"Black, Kevin and Janner, Michael and Du, Yilun and Kostrikov, Ilya and Levine, Sergey",2023.0,,,,arXiv preprint arXiv:2305.13301,Training Diffusion Models with Reinforcement Learning,Training Diffusion Models with Reinforcement Learning,http://arxiv.org/pdf/2305.13301v4,"Diffusion models are a class of flexible generative models trained with an approximation to the log-likelihood objective. However, most use cases of diffusion models are not concerned with likelihoods, but instead with downstream objectives such as human-perceived image quality or drug effectiveness. In this paper, we investigate reinforcement learning methods for directly optimizing diffusion models for such objectives. We describe how posing denoising as a multi-step decision-making problem enables a class of policy gradient algorithms, which we refer to as denoising diffusion policy optimization (DDPO), that are more effective than alternative reward-weighted likelihood approaches. Empirically, DDPO is able to adapt text-to-image diffusion models to objectives that are difficult to express via prompting, such as image compressibility, and those derived from human feedback, such as aesthetic quality. Finally, we show that DDPO can improve prompt-image alignment using feedback from a vision-language model without the need for additional data collection or human annotation. The project's website can be found at http://rl-diffusion.github.io ." Aligning Protein Conformation Ensemble Generation with Physical Feedback,2505.24203v1,fan2024reinforcement,\cite{fan2024reinforcement},Reinforcement learning for fine-tuning text-to-image diffusion models,,,True,False,"Fan, Ying and Watkins, Olivia and Du, Yuqing and Liu, Hao and Ryu, Moonkyung and Boutilier, Craig and Abbeel, Pieter and Ghavamzadeh, Mohammad and Lee, Kangwook and Lee, Kimin",2024.0,,,,Advances in Neural Information Processing Systems,Reinforcement learning for fine-tuning text-to-image diffusion models,DPOK: Reinforcement Learning for Fine-tuning Text-to-Image ... - arXiv,https://arxiv.org/abs/2305.16381,"We propose using online reinforcement learning (RL) to fine-tune text-to-image models. We focus on diffusion models, defining the fine-tuning task as an RL" Aligning Protein Conformation Ensemble Generation with Physical Feedback,2505.24203v1,rafailov2024direct,\cite{rafailov2024direct},"Direct Preference Optimization: Your Language Model is Secretly a Reward Model",http://arxiv.org/abs/2305.18290v3,"While large-scale unsupervised language models (LMs) learn broad world knowledge and some reasoning skills, achieving precise control of their behavior is difficult due to the completely unsupervised nature of their training. Existing methods for gaining such steerability collect human labels of the relative quality of model generations and fine-tune the unsupervised LM to align with these preferences, often with reinforcement learning from human feedback (RLHF). 
However, RLHF is a complex and often unstable procedure, first fitting a reward model that reflects the human preferences, and then fine-tuning the large unsupervised LM using reinforcement learning to maximize this estimated reward without drifting too far from the original model. In this paper we introduce a new parameterization of the reward model in RLHF that enables extraction of the corresponding optimal policy in closed form, allowing us to solve the standard RLHF problem with only a simple classification loss. The resulting algorithm, which we call Direct Preference Optimization (DPO), is stable, performant, and computationally lightweight, eliminating the need for sampling from the LM during fine-tuning or performing significant hyperparameter tuning. Our experiments show that DPO can fine-tune LMs to align with human preferences as well as or better than existing methods. Notably, fine-tuning with DPO exceeds PPO-based RLHF in ability to control sentiment of generations, and matches or improves response quality in summarization and single-turn dialogue while being substantially simpler to implement and train.",True,True,"Rafailov, Rafael and Sharma, Archit and Mitchell, Eric and Manning, Christopher D and Ermon, Stefano and Finn, Chelsea",2024.0,,,,Advances in Neural Information Processing Systems,"Direct Preference Optimization: Your Language Model is Secretly a Reward Model",Direct Preference Optimization: Your Language Model is Secretly a ...,https://arxiv.org/abs/2305.18290,"View a PDF of the paper titled Direct Preference Optimization: Your Language Model is Secretly a Reward Model, by Rafael Rafailov and 5 other authors (arXiv:2305.18290 [cs])" Aligning Protein Conformation Ensemble Generation with Physical Feedback,2505.24203v1,Wallace_2024_CVPR,\cite{Wallace_2024_CVPR},Diffusion Model Alignment Using Direct Preference Optimization,http://arxiv.org/abs/2311.12908v1,"Large language models (LLMs) are fine-tuned using human comparison data with Reinforcement Learning from Human Feedback (RLHF) methods to make them better aligned with users' preferences. In contrast to LLMs, human preference learning has not been widely explored in text-to-image diffusion models; the best existing approach is to fine-tune a pretrained model using carefully curated high quality images and captions to improve visual appeal and text alignment. We propose Diffusion-DPO, a method to align diffusion models to human preferences by directly optimizing on human comparison data. Diffusion-DPO is adapted from the recently developed Direct Preference Optimization (DPO), a simpler alternative to RLHF which directly optimizes a policy that best satisfies human preferences under a classification objective. We re-formulate DPO to account for a diffusion model notion of likelihood, utilizing the evidence lower bound to derive a differentiable objective.
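The DPO record above reduces RLHF to a single classification loss on policy-vs-reference log-ratios; Diffusion-DPO then swaps exact sequence likelihoods for a per-step ELBO. The core DPO objective in minimal PyTorch form (per-response log-probabilities, summed over tokens, are assumed precomputed):

```python
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Direct Preference Optimization: a logistic loss on the difference of
    policy-vs-reference log-ratios for the preferred (w) and dispreferred (l)
    responses; no explicit reward model or RL loop is needed."""
    ratio_w = logp_w - ref_logp_w        # log pi(y_w|x) - log pi_ref(y_w|x)
    ratio_l = logp_l - ref_logp_l        # log pi(y_l|x) - log pi_ref(y_l|x)
    return -F.logsigmoid(beta * (ratio_w - ratio_l)).mean()
```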
Using the Pick-a-Pic dataset of 851K crowdsourced pairwise preferences, we fine-tune the base model of the state-of-the-art Stable Diffusion XL (SDXL)-1.0 model with Diffusion-DPO. Our fine-tuned base model significantly outperforms both base SDXL-1.0 and the larger SDXL-1.0 model consisting of an additional refinement model in human evaluation, improving visual appeal and prompt alignment. We also develop a variant that uses AI feedback and has comparable performance to training on human preferences, opening the door for scaling of diffusion model alignment methods.",True,True,"Wallace, Bram and Dang, Meihua and Rafailov, Rafael and Zhou, Linqi and Lou, Aaron and Purushwalkam, Senthil and Ermon, Stefano and Xiong, Caiming and Joty, Shafiq and Naik, Nikhil",2024.0,June,,,,Diffusion Model Alignment Using Direct Preference Optimization,Diffusion Model Alignment Using Direct Preference Optimization,http://arxiv.org/pdf/2311.12908v1,"Large language models (LLMs) are fine-tuned using human comparison data with Reinforcement Learning from Human Feedback (RLHF) methods to make them better aligned with users' preferences. In contrast to LLMs, human preference learning has not been widely explored in text-to-image diffusion models; the best existing approach is to fine-tune a pretrained model using carefully curated high quality images and captions to improve visual appeal and text alignment. We propose Diffusion-DPO, a method to align diffusion models to human preferences by directly optimizing on human comparison data. Diffusion-DPO is adapted from the recently developed Direct Preference Optimization (DPO), a simpler alternative to RLHF which directly optimizes a policy that best satisfies human preferences under a classification objective. We re-formulate DPO to account for a diffusion model notion of likelihood, utilizing the evidence lower bound to derive a differentiable objective. Using the Pick-a-Pic dataset of 851K crowdsourced pairwise preferences, we fine-tune the base model of the state-of-the-art Stable Diffusion XL (SDXL)-1.0 model with Diffusion-DPO. Our fine-tuned base model significantly outperforms both base SDXL-1.0 and the larger SDXL-1.0 model consisting of an additional refinement model in human evaluation, improving visual appeal and prompt alignment. We also develop a variant that uses AI feedback and has comparable performance to training on human preferences, opening the door for scaling of diffusion model alignment methods." Aligning Protein Conformation Ensemble Generation with Physical Feedback,2505.24203v1,zhou2024antigen,\cite{zhou2024antigen},"Antigen-Specific Antibody Design via Direct Energy-based Preference Optimization",http://arxiv.org/abs/2403.16576v3,"Antibody design, a crucial task with significant implications across various disciplines such as therapeutics and biology, presents considerable challenges due to its intricate nature. In this paper, we tackle antigen-specific antibody sequence-structure co-design as an optimization problem towards specific preferences, considering both rationality and functionality. Leveraging a pre-trained conditional diffusion model that jointly models sequences and structures of antibodies with equivariant neural networks, we propose direct energy-based preference optimization to guide the generation of antibodies with both rational structures and considerable binding affinities to given antigens. Our method involves fine-tuning the pre-trained diffusion model using a residue-level decomposed energy preference. 
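The antibody-design record beginning above (like the Rosetta energy-function entry that follows it) replaces human raters with a physics-based scorer when building preference data. A schematic of how an energy function can mint DPO-style preference pairs; `energy_fn` is any callable returning a scalar energy, and the pairing heuristic here is illustrative rather than the paper's residue-level decomposition:

```python
def energy_preference_pairs(candidates, energy_fn):
    """Turn unlabeled generations into preference data by ranking them with a
    physics-based energy (e.g. a Rosetta-style scorer passed as energy_fn):
    lower energy wins. Pairs (winner, loser) can then feed a DPO-style loss."""
    scored = sorted(candidates, key=energy_fn)           # ascending energy
    mid = len(scored) // 2
    return list(zip(scored[:mid], scored[-mid:][::-1]))  # best vs. worst halves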
Additionally, we employ gradient surgery to address conflicts between various types of energy, such as attraction and repulsion. Experiments on RAbD benchmark show that our approach effectively optimizes the energy of generated antibodies and achieves state-of-the-art performance in designing high-quality antibodies with low total energy and high binding affinity simultaneously, demonstrating the superiority of our approach.",True,True,"Zhou, Xiangxin and Xue, Dongyu and Chen, Ruizhe and Zheng, Zaixiang and Wang, Liang and Gu, Quanquan",2024.0,,,,arXiv preprint arXiv:2403.16576,"Antigen-Specific Antibody Design via Direct Energy-based Preference Optimization",Antigen-Specific Antibody Design via Direct Energy-based ...,https://openreview.net/forum?id=GN2GXjPyN8&referrer=%5Bthe%20profile%20of%20Xiangxin%20Zhou%5D(%2Fprofile%3Fid%3D~Xiangxin_Zhou1),"Summary: This paper applies direct preference optimization to antibody design. Specifically, it uses Rosetta binding energy to guide a pre-trained diffusion" Aligning Protein Conformation Ensemble Generation with Physical Feedback,2505.24203v1,alford2017rosetta,\cite{alford2017rosetta},The Rosetta all-atom energy function for macromolecular modeling and design,,,True,False,"Alford, Rebecca F and Leaver-Fay, Andrew and Jeliazkov, Jeliazko R and O’Meara, Matthew J and DiMaio, Frank P and Park, Hahnbeom and Shapovalov, Maxim V and Renfrew, P Douglas and Mulligan, Vikram K and Kappel, Kalli and others",2017.0,,,,Journal of chemical theory and computation,The Rosetta all-atom energy function for macromolecular modeling and design,The Rosetta all-atom energy function for macromolecular ...,https://pmc.ncbi.nlm.nih.gov/articles/PMC5717763/,"by RF Alford · 2017 · Cited by 1630 — The goal of this paper is to describe the energy calculations used by the Rosetta macromolecular modeling program: we explain the underlying physical concepts," Aligning Protein Conformation Ensemble Generation with Physical Feedback,2505.24203v1,gu2024aligning,\cite{gu2024aligning},"Aligning Target-Aware Molecule Diffusion Models with Exact Energy Optimization",http://arxiv.org/abs/2407.01648v2,"Generating ligand molecules for specific protein targets, known as structure-based drug design, is a fundamental problem in therapeutics development and biological discovery. Recently, target-aware generative models, especially diffusion models, have shown great promise in modeling protein-ligand interactions and generating candidate drugs. However, existing models primarily focus on learning the chemical distribution of all drug candidates, which lacks effective steerability on the chemical quality of model generations. In this paper, we propose a novel and general alignment framework to align pretrained target diffusion models with preferred functional properties, named AliDiff. AliDiff shifts the target-conditioned chemical distribution towards regions with higher binding affinity and structural rationality, specified by user-defined reward functions, via the preference optimization approach. To avoid the overfitting problem in common preference optimization objectives, we further develop an improved Exact Energy Preference Optimization method to yield an exact and efficient alignment of the diffusion models, and provide the closed-form expression for the converged distribution. Empirical studies on the CrossDocked2020 benchmark show that AliDiff can generate molecules with state-of-the-art binding energies with up to -7.07 Avg. 
Vina Score, while maintaining strong molecular properties. Code is available at https://github.com/MinkaiXu/AliDiff.",True,True,"Gu, Siyi and Xu, Minkai and Powers, Alexander and Nie, Weili and Geffner, Tomas and Kreis, Karsten and Leskovec, Jure and Vahdat, Arash and Ermon, Stefano",2024.0,,,,arXiv preprint arXiv:2407.01648,"Aligning Target-Aware Molecule Diffusion Models with Exact Energy Optimization",[PDF] Aligning Target-Aware Molecule Diffusion Models with Exact Energy ...,https://proceedings.neurips.cc/paper_files/paper/2024/file/4ddfe69f164eae70abc86f0f9cbed7e8-Paper-Conference.pdf,"ALIDIFF aligns target-aware diffusion models with preferred properties, shifting chemical distribution towards higher binding affinity and structural" Aligning Protein Conformation Ensemble Generation with Physical Feedback,2505.24203v1,cheng2024decomposed,\cite{cheng2024decomposed},"Decomposed Direct Preference Optimization for Structure-Based Drug Design",http://arxiv.org/abs/2407.13981v2,"Diffusion models have achieved promising results for Structure-Based Drug Design (SBDD). Nevertheless, high-quality protein subpocket and ligand data are relatively scarce, which hinders the models' generation capabilities. Recently, Direct Preference Optimization (DPO) has emerged as a pivotal tool for aligning generative models with human preferences. In this paper, we propose DecompDPO, a structure-based optimization method aligns diffusion models with pharmaceutical needs using multi-granularity preference pairs. DecompDPO introduces decomposition into the optimization objectives and obtains preference pairs at the molecule or decomposed substructure level based on each objective's decomposability. Additionally, DecompDPO introduces a physics-informed energy term to ensure reasonable molecular conformations in the optimization results. Notably, DecompDPO can be effectively used for two main purposes: (1) fine-tuning pretrained diffusion models for molecule generation across various protein families, and (2) molecular optimization given a specific protein subpocket after generation. Extensive experiments on the CrossDocked2020 benchmark show that DecompDPO significantly improves model performance, achieving up to 95.2% Med. High Affinity and a 36.2% success rate for molecule generation, and 100% Med. 
High Affinity and a 52.1% success rate for molecular optimization.",True,True,"Cheng, Xiwei and Zhou, Xiangxin and Yang, Yuwei and Bao, Yu and Gu, Quanquan",2024.0,,,,arXiv preprint arXiv:2407.13981,"Decomposed Direct Preference Optimization for Structure-Based Drug Design",Decomposed Direct Preference Optimization for Structure ...,https://arxiv.org/abs/2407.13981,"View a PDF of the paper titled Decomposed Direct Preference Optimization for Structure-Based Drug Design, by Xiwei Cheng and 4 other authors (arXiv:2407.13981v2 [q-bio.BM])" "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,GPT4,\cite{GPT4},{GPT-4} Technical Report,,,True,False,OpenAI,2023.0,,https://doi.org/10.48550/arXiv.2303.08774,10.48550/ARXIV.2303.08774,CoRR,{GPT-4} Technical Report,(PDF) GPT-4 Technical Report - ResearchGate,https://www.researchgate.net/publication/383739523_GPT-4_Technical_Report,"We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,PaLM,\cite{PaLM},PaLM: Scaling Language Modeling with Pathways,http://arxiv.org/abs/2204.02311v5,"Large language models have been shown to achieve remarkable performance across a variety of natural language tasks using few-shot learning, which drastically reduces the number of task-specific training examples needed to adapt the model to a particular application. To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model PaLM. We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML system which enables highly efficient training across multiple TPU Pods. We demonstrate continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks. On a number of these tasks, PaLM 540B achieves breakthrough performance, outperforming the finetuned state-of-the-art on a suite of multi-step reasoning tasks, and outperforming average human performance on the recently released BIG-bench benchmark. A significant number of BIG-bench tasks showed discontinuous improvements from model scale, meaning that performance steeply increased as we scaled to our largest model. PaLM also has strong capabilities in multilingual tasks and source code generation, which we demonstrate on a wide array of benchmarks. We additionally provide a comprehensive analysis on bias and toxicity, and study the extent of training data memorization with respect to model scale. 
Finally, we discuss the ethical considerations related to large language models and discuss potential mitigation strategies.",True,True,"Aakanksha Chowdhery and Sharan Narang and Jacob Devlin and Maarten Bosma and Gaurav Mishra and Adam Roberts and Paul Barham and Hyung Won Chung and Charles Sutton and Sebastian Gehrmann and Parker Schuh and Kensen Shi and Sasha Tsvyashchenko and Joshua Maynez and Abhishek Rao and Parker Barnes and Yi Tay and Noam Shazeer and Vinodkumar Prabhakaran and Emily Reif and Nan Du and Ben Hutchinson and Reiner Pope and James Bradbury and Jacob Austin and Michael Isard and Guy Gur{-}Ari and Pengcheng Yin and Toju Duke and Anselm Levskaya and Sanjay Ghemawat and Sunipa Dev and Henryk Michalewski and Xavier Garcia and Vedant Misra and Kevin Robinson and Liam Fedus and Denny Zhou and Daphne Ippolito and David Luan and Hyeontaek Lim and Barret Zoph and Alexander Spiridonov and Ryan Sepassi and David Dohan and Shivani Agrawal and Mark Omernick and Andrew M. Dai and Thanumalayan Sankaranarayana Pillai and Marie Pellat and Aitor Lewkowycz and Erica Moreira and Rewon Child and Oleksandr Polozov and Katherine Lee and Zongwei Zhou and Xuezhi Wang and Brennan Saeta and Mark Diaz and Orhan Firat and Michele Catasta and Jason Wei and Kathy Meier{-}Hellstern and Douglas Eck and Jeff Dean and Slav Petrov and Noah Fiedel",2023.0,,http://jmlr.org/papers/v24/22-1144.html,,J. Mach. Learn. Res.,PaLM: Scaling Language Modeling with Pathways,PaLM: Scaling Language Modeling with Pathways,http://arxiv.org/pdf/2204.02311v5,"Large language models have been shown to achieve remarkable performance across a variety of natural language tasks using few-shot learning, which drastically reduces the number of task-specific training examples needed to adapt the model to a particular application. To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model PaLM. We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML system which enables highly efficient training across multiple TPU Pods. We demonstrate continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks. On a number of these tasks, PaLM 540B achieves breakthrough performance, outperforming the finetuned state-of-the-art on a suite of multi-step reasoning tasks, and outperforming average human performance on the recently released BIG-bench benchmark. A significant number of BIG-bench tasks showed discontinuous improvements from model scale, meaning that performance steeply increased as we scaled to our largest model. PaLM also has strong capabilities in multilingual tasks and source code generation, which we demonstrate on a wide array of benchmarks. We additionally provide a comprehensive analysis on bias and toxicity, and study the extent of training data memorization with respect to model scale. Finally, we discuss the ethical considerations related to large language models and discuss potential mitigation strategies." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,llama,\cite{llama},LLaMA: Open and Efficient Foundation Language Models,http://arxiv.org/abs/2302.13971v1,"We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. 
We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.",True,True,"Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie{-}Anne Lachaux and Timoth{\'{e}}e Lacroix and Baptiste Rozi{\`{e}}re and Naman Goyal and Eric Hambro and Faisal Azhar and Aur{\'{e}}lien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample",2023.0,,https://doi.org/10.48550/arXiv.2302.13971,10.48550/ARXIV.2302.13971,CoRR,LLaMA: Open and Efficient Foundation Language Models,LLaMA: Open and Efficient Foundation Language Models,http://arxiv.org/pdf/2302.13971v1,"We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,Llama_2,\cite{Llama_2},Llama 2: Open Foundation and Fine-Tuned Chat Models,http://arxiv.org/abs/2307.09288v2,"In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. 
We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.",True,True,"Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton{-}Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie{-}Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aur{\'{e}}lien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom",2023.0,,https://doi.org/10.48550/arXiv.2307.09288,10.48550/ARXIV.2307.09288,CoRR,Llama 2: Open Foundation and Fine-Tuned Chat Models,Llama 2: Open Foundation and Fine-Tuned Chat Models - Meta AI,https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/,"We develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,anthropic_claude,\cite{anthropic_claude},Claude: A Family of AI Models,,,True,False,Anthropic,2024.0,,https://www.anthropic.com/product,,,Claude: A Family of AI Models,Introducing the next generation of Claude - Anthropic,https://www.anthropic.com/news/claude-3-family,"The family includes three state-of-the-art models in ascending order of capability: Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus. ### Claude 3 model family The Claude 3 models can power live customer chats, auto-completions, and data extraction tasks where responses must be immediate and in real-time. We’ve developed the Claude 3 family of models to be as trustworthy as they are capable. While the Claude 3 model family has advanced on key measures of biological knowledge, cyber-related knowledge, and autonomy compared to previous models, it remains at AI Safety Level 2 (ASL-2) per our Responsible Scaling Policy. **Claude 3 Opus**is our most intelligent model, with best-in-market performance on highly complex tasks. **Claude 3 Haiku** is our fastest, most compact model for near-instant responsiveness. ### Claude models" "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,gemma,\cite{gemma},Gemma: Open Models Based on Gemini Research and Technology,http://arxiv.org/abs/2403.08295v4,"This work introduces Gemma, a family of lightweight, state-of-the art open models built from the research and technology used to create Gemini models. 
Gemma models demonstrate strong performance across academic benchmarks for language understanding, reasoning, and safety. We release two sizes of models (2 billion and 7 billion parameters), and provide both pretrained and fine-tuned checkpoints. Gemma outperforms similarly sized open models on 11 out of 18 text-based tasks, and we present comprehensive evaluations of safety and responsibility aspects of the models, alongside a detailed description of model development. We believe the responsible release of LLMs is critical for improving the safety of frontier models, and for enabling the next wave of LLM innovations.",True,True,"Thomas Mesnard and Cassidy Hardin and Robert Dadashi and Surya Bhupatiraju and Shreya Pathak and Laurent Sifre and Morgane Rivi{\`{e}}re and Mihir Sanjay Kale and Juliette Love and Pouya Tafti and L{\'{e}}onard Hussenot and Aakanksha Chowdhery and Adam Roberts and Aditya Barua and Alex Botev and Alex Castro{-}Ros and Ambrose Slone and Am{\'{e}}lie H{\'{e}}liou and Andrea Tacchetti and Anna Bulanova and Antonia Paterson and Beth Tsai and Bobak Shahriari and Charline Le Lan and Christopher A. Choquette{-}Choo and Cl{\'{e}}ment Crepy and Daniel Cer and Daphne Ippolito and David Reid and Elena Buchatskaya and Eric Ni and Eric Noland and Geng Yan and George Tucker and George{-}Christian Muraru and Grigory Rozhdestvenskiy and Henryk Michalewski and Ian Tenney and Ivan Grishchenko and Jacob Austin and James Keeling and Jane Labanowski and Jean{-}Baptiste Lespiau and Jeff Stanway and Jenny Brennan and Jeremy Chen and Johan Ferret and Justin Chiu and et al.",2024.0,,https://doi.org/10.48550/arXiv.2403.08295,10.48550/ARXIV.2403.08295,CoRR,Gemma: Open Models Based on Gemini Research and Technology,Gemma: Open Models Based on Gemini Research and Technology,http://arxiv.org/pdf/2403.08295v4,"This work introduces Gemma, a family of lightweight, state-of-the art open models built from the research and technology used to create Gemini models. Gemma models demonstrate strong performance across academic benchmarks for language understanding, reasoning, and safety. We release two sizes of models (2 billion and 7 billion parameters), and provide both pretrained and fine-tuned checkpoints. Gemma outperforms similarly sized open models on 11 out of 18 text-based tasks, and we present comprehensive evaluations of safety and responsibility aspects of the models, alongside a detailed description of model development. We believe the responsible release of LLMs is critical for improving the safety of frontier models, and for enabling the next wave of LLM innovations." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,AIIndex2024,\cite{AIIndex2024},AI Index Report,,,True,False,stanford,,,,,,AI Index Report,AI Index 2025: State of AI in 10 Charts | Stanford HAI,https://hai.stanford.edu/news/ai-index-2025-state-of-ai-in-10-charts,"The 2025 AI Index Report, published on April 7, 2025, is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by the AI Index Steering Committee, an interdisciplinary group of experts from across academia and industry. Each year, the report covers the biggest technical advances, new achievements in benchmarking, investment flowing into generative AI, education trends, legislation around this technology, and more.
Experts from Stanford HAI and top universities urge policymakers to prioritize scientific understanding to govern frontier AI." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,apple_ai,\cite{apple_ai},Apple Intelligence,,,True,False,Apple,2024.0,,,,,Apple Intelligence,How to get Apple Intelligence,https://support.apple.com/en-us/121115,"Apple Intelligence is available in beta starting with iOS 18.1, iPadOS 18.1, macOS Sequoia 15.1, and visionOS 2.4 after your device language and Siri language are set to the same supported language. Apple Intelligence is available with iOS 18.4 and iPadOS 18.4 or later on supported iPhone and iPad models, and with macOS Sequoia 15.1 or later on supported Mac models." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,song2024powerinfer,\cite{song2024powerinfer},PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU,http://arxiv.org/abs/2312.12456v2,"This paper introduces PowerInfer, a high-speed Large Language Model (LLM) inference engine on a personal computer (PC) equipped with a single consumer-grade GPU. The key principle underlying the design of PowerInfer is exploiting the high locality inherent in LLM inference, characterized by a power-law distribution in neuron activation. This distribution indicates that a small subset of neurons, termed hot neurons, are consistently activated across inputs, while the majority, cold neurons, vary based on specific inputs. PowerInfer exploits such an insight to design a GPU-CPU hybrid inference engine: hot-activated neurons are preloaded onto the GPU for fast access, while cold-activated neurons are computed on the CPU, thus significantly reducing GPU memory demands and CPU-GPU data transfers. PowerInfer further integrates adaptive predictors and neuron-aware sparse operators, optimizing the efficiency of neuron activation and computational sparsity. The evaluation shows that PowerInfer significantly outperforms llama.cpp by up to 11.69x while retaining model accuracy across various LLMs (including OPT-175B) on a single NVIDIA RTX 4090 GPU. For the OPT-30B model, PowerInfer achieves performance comparable to that of a high-end server-grade A100 GPU, reaching 82% of its token generation rate on a single consumer-grade RTX 4090 GPU.",True,True,"Song, Yixin and Mi, Zeyu and Xie, Haotong and Chen, Haibo",2024.0,,,,,PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU,Fast Large Language Model Serving with a Consumer-grade GPU,https://arxiv.org/abs/2312.12456,"This paper introduces PowerInfer, a high-speed Large Language Model (LLM) inference engine on a personal computer (PC) equipped with a single consumer-grade" "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,yuan2023mobile,\cite{yuan2023mobile},Mobile Foundation Model as Firmware,http://arxiv.org/abs/2308.14363v3,"In today's landscape, smartphones have evolved into hubs for hosting a multitude of deep learning models aimed at local execution.
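The PowerInfer record above keys off a power-law activation distribution: a small set of "hot" neurons accounts for most activations and is pinned on the GPU, while the "cold" remainder is computed on the CPU. A toy sketch of that partition (`activation_counts` would come from offline profiling; the names are illustrative, not PowerInfer's API):

```python
import numpy as np

def split_hot_cold(activation_counts, gpu_budget):
    """PowerInfer-style placement sketch: preload the most frequently
    activated ('hot') neurons onto the GPU so they cover most of the
    inference-time work; rarely activated ('cold') neurons stay on the CPU."""
    order = np.argsort(activation_counts)[::-1]  # most-activated first
    hot = order[:gpu_budget]                     # resident on the GPU
    cold = order[gpu_budget:]                    # computed on the CPU
    return hot, cold
```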
A key realization driving this work is the notable fragmentation among these models, characterized by varied architectures, operators, and implementations. This fragmentation imposes a significant burden on the comprehensive optimization of hardware, system settings, and algorithms. Buoyed by the recent strides in large foundation models, this work introduces a pioneering paradigm for mobile AI: a collaborative management approach between the mobile OS and hardware, overseeing a foundational model capable of serving a broad spectrum of mobile AI tasks, if not all. This foundational model resides within the NPU and remains impervious to app or OS revisions, akin to firmware. Concurrently, each app contributes a concise, offline fine-tuned ""adapter"" tailored to distinct downstream tasks. From this concept emerges a concrete instantiation known as \sys. It amalgamates a curated selection of publicly available Large Language Models (LLMs) and facilitates dynamic data flow. This concept's viability is substantiated through the creation of an exhaustive benchmark encompassing 38 mobile AI tasks spanning 50 datasets, including domains such as Computer Vision (CV), Natural Language Processing (NLP), audio, sensing, and multimodal inputs. Spanning this benchmark, \sys unveils its impressive performance. It attains accuracy parity in 85\% of tasks, demonstrates improved scalability in terms of storage and memory, and offers satisfactory inference speed on Commercial Off-The-Shelf (COTS) mobile devices fortified with NPU support. This stands in stark contrast to task-specific models tailored for individual applications.",True,True,"Yuan, Jinliang and Yang, Chen and Cai, Dongqi and Wang, Shihe and Yuan, Xin and Zhang, Zeling and Li, Xiang and Zhang, Dingge and Mei, Hanzi and Jia, Xianqing and others",2023.0,,,,arXiv preprint arXiv:2308.14363,Mobile Foundation Model as Firmware,Mobile Foundation Model as Firmware,http://arxiv.org/pdf/2308.14363v3,"In today's landscape, smartphones have evolved into hubs for hosting a multitude of deep learning models aimed at local execution. A key realization driving this work is the notable fragmentation among these models, characterized by varied architectures, operators, and implementations. This fragmentation imposes a significant burden on the comprehensive optimization of hardware, system settings, and algorithms. Buoyed by the recent strides in large foundation models, this work introduces a pioneering paradigm for mobile AI: a collaborative management approach between the mobile OS and hardware, overseeing a foundational model capable of serving a broad spectrum of mobile AI tasks, if not all. This foundational model resides within the NPU and remains impervious to app or OS revisions, akin to firmware. Concurrently, each app contributes a concise, offline fine-tuned ""adapter"" tailored to distinct downstream tasks. From this concept emerges a concrete instantiation known as \sys. It amalgamates a curated selection of publicly available Large Language Models (LLMs) and facilitates dynamic data flow. This concept's viability is substantiated through the creation of an exhaustive benchmark encompassing 38 mobile AI tasks spanning 50 datasets, including domains such as Computer Vision (CV), Natural Language Processing (NLP), audio, sensing, and multimodal inputs. Spanning this benchmark, \sys unveils its impressive performance. 
It attains accuracy parity in 85\% of tasks, demonstrates improved scalability in terms of storage and memory, and offers satisfactory inference speed on Commercial Off-The-Shelf (COTS) mobile devices fortified with NPU support. This stands in stark contrast to task-specific models tailored for individual applications." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,app_3,\cite{app_3},"Drive Like a Human: Rethinking Autonomous Driving with Large Language Models",http://arxiv.org/abs/2307.07162v1,"In this paper, we explore the potential of using a large language model (LLM) to understand the driving environment in a human-like manner and analyze its ability to reason, interpret, and memorize when facing complex scenarios. We argue that traditional optimization-based and modular autonomous driving (AD) systems face inherent performance limitations when dealing with long-tail corner cases. To address this problem, we propose that an ideal AD system should drive like a human, accumulating experience through continuous driving and using common sense to solve problems. To achieve this goal, we identify three key abilities necessary for an AD system: reasoning, interpretation, and memorization. We demonstrate the feasibility of employing an LLM in driving scenarios by building a closed-loop system to showcase its comprehension and environment-interaction abilities. Our extensive experiments show that the LLM exhibits the impressive ability to reason and solve long-tailed cases, providing valuable insights for the development of human-like autonomous driving. The related code are available at https://github.com/PJLab-ADG/DriveLikeAHuman .",True,True,"Fu, Daocheng and Li, Xin and Wen, Licheng and Dou, Min and Cai, Pinlong and Shi, Botian and Qiao, Yu",2024.0,,,,,"Drive Like a Human: Rethinking Autonomous Driving with Large Language Models",Drive Like a Human: Rethinking Autonomous Driving with ...,https://www.computer.org/csdl/proceedings-article/wacvw/2024/702800a910/1WbOYt2kbVC,"by D Fu · 2024 · Cited by 238 — In this paper, we explore the potential of using a large language model (LLM) to understand the driving environment in a human-like manner." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,app5_robot,\cite{app5_robot},"BAT: Behavior-Aware Human-Like Trajectory Prediction for Autonomous Driving",http://arxiv.org/abs/2312.06371v2,"The ability to accurately predict the trajectory of surrounding vehicles is a critical hurdle to overcome on the journey to fully autonomous vehicles. To address this challenge, we pioneer a novel behavior-aware trajectory prediction model (BAT) that incorporates insights and findings from traffic psychology, human behavior, and decision-making. Our model consists of behavior-aware, interaction-aware, priority-aware, and position-aware modules that perceive and understand the underlying interactions and account for uncertainty and variability in prediction, enabling higher-level learning and flexibility without rigid categorization of driving behavior. Importantly, this approach eliminates the need for manual labeling in the training process and addresses the challenges of non-continuous behavior labeling and the selection of appropriate time windows. 
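The "Mobile Foundation Model as Firmware" record above keeps one frozen, firmware-like backbone on the NPU while each app ships a compact offline fine-tuned adapter. A minimal sketch of that arrangement; the LoRA-style low-rank adapter form is an assumption, since the snippet does not specify what the adapter looks like:

```python
import torch
import torch.nn as nn

# Sketch: a frozen, firmware-like backbone layer plus a small per-app adapter.
# The low-rank (LoRA-style) form below is an illustrative assumption.
class AdaptedLinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # backbone stays frozen (firmware)
            p.requires_grad = False
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)     # adapter starts as a no-op

    def forward(self, x):
        return self.base(x) + self.up(self.down(x))

layer = AdaptedLinear(nn.Linear(512, 512))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable adapter params: {trainable} of {total}")
```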
We evaluate BAT's performance across the Next Generation Simulation (NGSIM), Highway Drone (HighD), Roundabout Drone (RounD), and Macao Connected Autonomous Driving (MoCAD) datasets, showcasing its superiority over prevailing state-of-the-art (SOTA) benchmarks in terms of prediction accuracy and efficiency. Remarkably, even when trained on reduced portions of the training data (25%), our model outperforms most of the baselines, demonstrating its robustness and efficiency in predicting vehicle trajectories, and the potential to reduce the amount of data required to train autonomous vehicles, especially in corner cases. In conclusion, the behavior-aware model represents a significant advancement in the development of autonomous vehicles capable of predicting trajectories with the same level of proficiency as human drivers. The project page is available at https://github.com/Petrichor625/BATraj-Behavior-aware-Model.",True,True,"Haicheng Liao and Zhenning Li and Huanming Shen and Wenxuan Zeng and Dongping Liao and Guofa Li and Shengbo Eben Li and Chengzhong Xu",2023.0,,https://doi.org/10.48550/arXiv.2312.06371,10.48550/ARXIV.2312.06371,CoRR,"BAT: Behavior-Aware Human-Like Trajectory Prediction for Autonomous Driving",Behavior-Aware Human-Like Trajectory Prediction for Autonomous ...,https://github.com/Petrichor625/BATraj-Behavior-aware-Model,Introducing a real-time dynamic geometric graph method for the continuous representation of driving behavior in trajectory prediction for autonomous driving. "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,app6_laptops,\cite{app6_laptops},Creating Large Language Models on Your Laptop,,,True,False,Xinyu Ye and Zhe Wang and Haihao Shen and Yu Luo and Hanwen Chang,2023.0,,,,,Creating Large Language Models on Your Laptop,How to run an LLM on your laptop,https://www.technologyreview.com/2025/07/17/1120391/how-to-run-an-llm-on-your-laptop/,It's now possible to run useful models from the safety and comfort of your own computer. Here's how. "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,ExeGPT,\cite{ExeGPT},ExeGPT: Constraint-Aware Resource Scheduling for LLM Inference,http://arxiv.org/abs/2404.07947v1,"This paper presents ExeGPT, a distributed system designed for constraint-aware LLM inference. ExeGPT finds and runs with an optimal execution schedule to maximize inference throughput while satisfying a given latency constraint. By leveraging the distribution of input and output sequences, it effectively allocates resources and determines optimal execution configurations, including batch sizes and partial tensor parallelism. We also introduce two scheduling strategies based on Round-Robin Allocation and Workload-Aware Allocation policies, suitable for different NLP workloads. We evaluate ExeGPT on six LLM instances of T5, OPT, and GPT-3 and five NLP tasks, each with four distinct latency constraints. Compared to FasterTransformer, ExeGPT achieves up to 15.2x improvements in throughput and 6x improvements in latency. Overall, ExeGPT achieves an average throughput gain of 2.9x across twenty evaluation scenarios. Moreover, when adapting to changing sequence distributions, the cost of adjusting the schedule in ExeGPT is reasonably modest. 
ExeGPT proves to be an effective solution for optimizing and executing LLM inference for diverse NLP workload and serving conditions.",True,True,"Hyungjun Oh and Kihong Kim and Jaemin Kim and Sungkyun Kim and Junyeol Lee and Du{-}seong Chang and Jiwon Seo",2024.0,,https://doi.org/10.1145/3620665.3640383,10.1145/3620665.3640383,,ExeGPT: Constraint-Aware Resource Scheduling for LLM Inference,ASPLOS'24 - Lightning Talks - Session 2D - ExeGPT: Constraint ...,https://www.youtube.com/watch?v=UhBwDpY4hV4,"... Inference Systems Paper Title: ExeGPT: Constraint-Aware Resource Scheduling for LLM Inference Authors: Hyungjun Oh, Kihong Kim, Jaemin Kim" "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,Splitwise,\cite{Splitwise},Splitwise: Efficient generative LLM inference using phase splitting,http://arxiv.org/abs/2311.18677v2,"Recent innovations in generative large language models (LLMs) have made their applications and use-cases ubiquitous. This has led to large-scale deployments of these models, using complex, expensive, and power-hungry AI accelerators, most commonly GPUs. These developments make LLM inference efficiency an important challenge. Based on our extensive characterization, we find that there are two main phases during an LLM inference request: a compute-intensive prompt computation, and a memory-intensive token generation, each with distinct latency, throughput, memory, and power characteristics. Despite state-of-the-art batching and scheduling, the token generation phase underutilizes compute resources. Specifically, unlike compute-intensive prompt computation phases, token generation phases do not require the compute capability of the latest GPUs, and can be run with lower power and cost. With Splitwise, we propose splitting the two phases of a LLM inference request on to separate machines. This allows us to use hardware that is well-suited for each phase, and provision resources independently per phase. However, splitting an inference request across machines requires state transfer from the machine running prompt computation over to the machine generating tokens. We implement and optimize this state transfer using the fast back-plane interconnects available in today's GPU clusters. We use the Splitwise technique to design LLM inference clusters using the same or different types of machines for the prompt computation and token generation phases. Our clusters are optimized for three key objectives: throughput, cost, and power. In particular, we show that we can achieve 1.4x higher throughput at 20% lower cost than current designs. Alternatively, we can achieve 2.35x more throughput with the same cost and power budgets.",True,True,"Pratyush Patel and Esha Choukse and Chaojie Zhang and {\'{I}}{\~{n}}igo Goiri and Aashaka Shah and Saeed Maleki and Ricardo Bianchini",2023.0,,https://doi.org/10.48550/arXiv.2311.18677,10.48550/ARXIV.2311.18677,CoRR,Splitwise: Efficient generative LLM inference using phase splitting,Splitwise: Efficient generative LLM inference using phase splitting,http://arxiv.org/pdf/2311.18677v2,"Recent innovations in generative large language models (LLMs) have made their applications and use-cases ubiquitous. This has led to large-scale deployments of these models, using complex, expensive, and power-hungry AI accelerators, most commonly GPUs. These developments make LLM inference efficiency an important challenge. 
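The ExeGPT record above frames serving as maximizing throughput subject to a latency constraint. A minimal sketch of such constraint-aware configuration search, assuming a toy linear latency model; base_ms and per_seq_ms are illustrative stand-ins for profiled costs, and the real system also tunes partial tensor parallelism and allocation policies:

```python
# Sketch of ExeGPT-style constraint-aware search: pick the batch size with the
# highest estimated throughput whose latency still satisfies the SLO.
def pick_batch_size(slo_ms: float, base_ms: float = 8.0, per_seq_ms: float = 1.5):
    best = None
    for batch in (1, 2, 4, 8, 16, 32, 64):
        latency = base_ms + per_seq_ms * batch   # assumed cost model
        if latency > slo_ms:
            break
        throughput = batch / latency             # sequences per ms
        if best is None or throughput > best[1]:
            best = (batch, throughput, latency)
    return best

batch, tput, lat = pick_batch_size(slo_ms=40.0)
print(f"batch={batch}, ~{tput * 1000:.0f} seq/s at {lat:.1f} ms (SLO 40 ms)")
```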
Based on our extensive characterization, we find that there are two main phases during an LLM inference request: a compute-intensive prompt computation, and a memory-intensive token generation, each with distinct latency, throughput, memory, and power characteristics. Despite state-of-the-art batching and scheduling, the token generation phase underutilizes compute resources. Specifically, unlike compute-intensive prompt computation phases, token generation phases do not require the compute capability of the latest GPUs, and can be run with lower power and cost. With Splitwise, we propose splitting the two phases of a LLM inference request on to separate machines. This allows us to use hardware that is well-suited for each phase, and provision resources independently per phase. However, splitting an inference request across machines requires state transfer from the machine running prompt computation over to the machine generating tokens. We implement and optimize this state transfer using the fast back-plane interconnects available in today's GPU clusters. We use the Splitwise technique to design LLM inference clusters using the same or different types of machines for the prompt computation and token generation phases. Our clusters are optimized for three key objectives: throughput, cost, and power. In particular, we show that we can achieve 1.4x higher throughput at 20% lower cost than current designs. Alternatively, we can achieve 2.35x more throughput with the same cost and power budgets." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,PagedAttention,\cite{PagedAttention},"Efficient Memory Management for Large Language Model Serving with PagedAttention",http://arxiv.org/abs/2309.06180v1,"High throughput serving of large language models (LLMs) requires batching sufficiently many requests at a time. However, existing systems struggle because the key-value cache (KV cache) memory for each request is huge and grows and shrinks dynamically. When managed inefficiently, this memory can be significantly wasted by fragmentation and redundant duplication, limiting the batch size. To address this problem, we propose PagedAttention, an attention algorithm inspired by the classical virtual memory and paging techniques in operating systems. On top of it, we build vLLM, an LLM serving system that achieves (1) near-zero waste in KV cache memory and (2) flexible sharing of KV cache within and across requests to further reduce memory usage. Our evaluations show that vLLM improves the throughput of popular LLMs by 2-4$\times$ with the same level of latency compared to the state-of-the-art systems, such as FasterTransformer and Orca. The improvement is more pronounced with longer sequences, larger models, and more complex decoding algorithms. vLLM's source code is publicly available at https://github.com/vllm-project/vllm",True,True,"Woosuk Kwon and Zhuohan Li and Siyuan Zhuang and Ying Sheng and Lianmin Zheng and Cody Hao Yu and Joseph Gonzalez and Hao Zhang and Ion Stoica",2023.0,,https://doi.org/10.1145/3600006.3613165,10.1145/3600006.3613165,,"Efficient Memory Management for Large Language Model Serving with PagedAttention",Efficient Memory Management for Large Language Model ...,https://arxiv.org/pdf/2309.06180,"Efficient Memory Management for Large Language Model Serving with PagedAttention, by Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica (UC Berkeley, Stanford University, Independent Researcher, UC San Diego). Abstract: High throughput serving of large language models (LLMs) requires batching sufficiently many requests at a time. To address this problem, we propose PagedAttention, an attention algorithm inspired by the classical virtual memory and paging techniques in operating systems. On top of it, we build vLLM, an LLM serving system that achieves (1) near-zero waste in KV cache memory and (2) flexible sharing of KV cache within and across requests to further reduce memory usage. To address the above limitations, we propose PagedAttention, an attention algorithm inspired by the operating system's (OS) solution to memory fragmentation and sharing: virtual memory with paging. In this work, we build vLLM, a high-throughput distributed LLM serving engine on top of PagedAttention that achieves near-zero waste in KV cache memory." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,Just-in-time,\cite{Just-in-time},"Just-in-time Quantization with Processing-In-Memory for Efficient ML Training",http://arxiv.org/abs/2311.05034v1,"Data format innovations have been critical for machine learning (ML) scaling, which in turn fuels ground-breaking ML capabilities. However, even in the presence of low-precision formats, model weights are often stored in both high-precision and low-precision during training. Furthermore, with emerging directional data formats (e.g., MX9, MX6, etc.) multiple low-precision weight copies can be required. To lower memory capacity needs of weights, we explore just-in-time quantization (JIT-Q) where we only store high-precision weights in memory and generate low-precision weights only when needed. To perform JIT-Q efficiently, in this work, we evaluate emerging processing-in-memory (PIM) technology to execute quantization. With PIM, we can offload quantization to in-memory compute units enabling quantization to be performed without incurring costly data movement while allowing quantization to be concurrent with accelerator computation. Our proposed PIM-offloaded quantization keeps up with GPU compute and delivers considerable capacity savings (up to 24\%) at marginal throughput loss (up to 2.4\%). Said memory capacity savings can unlock several benefits such as fitting larger model in the same system, reducing model parallelism requirement, and improving overall ML training efficiency.",True,True,Mohamed Assem Ibrahim and Shaizeen Aga and Ada Li and Suchita Pati and Mahzabeen Islam,2023.0,,https://arxiv.org/abs/2311.05034,,,"Just-in-time Quantization with Processing-In-Memory for Efficient ML Training",Just-in-time Quantization with Processing-In-Memory for Efficient ML ...,https://arxiv.org/abs/2311.05034,We explore just-in-time quantization (JIT-Q) where we only store high-precision weights in memory and generate low-precision weights only when needed. "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,llm_flash,\cite{llm_flash},"LLM in a flash: Efficient Large Language Model Inference with Limited Memory",http://arxiv.org/abs/2312.11514v3,"Large language models (LLMs) are central to modern natural language processing, delivering exceptional performance in various tasks. However, their substantial computational and memory requirements present challenges, especially for devices with limited DRAM capacity.
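The PagedAttention record above borrows virtual-memory paging for the KV cache: memory is handed out in fixed-size blocks, and a per-request block table maps logical token positions to physical blocks, so no contiguous reservation is needed. A minimal sketch of that bookkeeping; BLOCK_SIZE and the allocator API are illustrative, and vLLM additionally shares blocks across requests:

```python
# Sketch of PagedAttention-style block-table allocation for the KV cache.
BLOCK_SIZE = 16

class KVCacheAllocator:
    def __init__(self, num_blocks: int):
        self.free = list(range(num_blocks))   # physical block ids
        self.block_tables = {}                # request id -> [physical blocks]

    def append_token(self, req_id: str, position: int):
        table = self.block_tables.setdefault(req_id, [])
        if position // BLOCK_SIZE >= len(table):   # logical block not mapped yet
            table.append(self.free.pop())          # allocate on demand
        block = table[position // BLOCK_SIZE]
        return block, position % BLOCK_SIZE         # physical slot for this KV

    def release(self, req_id: str):
        self.free.extend(self.block_tables.pop(req_id, []))

alloc = KVCacheAllocator(num_blocks=64)
for pos in range(40):                               # 40 tokens -> 3 blocks
    alloc.append_token("req0", pos)
print("blocks used:", len(alloc.block_tables["req0"]))
```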
This paper tackles the challenge of efficiently running LLMs that exceed the available DRAM capacity by storing the model parameters in flash memory, but bringing them on demand to DRAM. Our method involves constructing an inference cost model that takes into account the characteristics of flash memory, guiding us to optimize in two critical areas: reducing the volume of data transferred from flash and reading data in larger, more contiguous chunks. Within this hardware-informed framework, we introduce two principal techniques. First, ""windowing"" strategically reduces data transfer by reusing previously activated neurons, and second, ""row-column bundling"", tailored to the sequential data access strengths of flash memory, increases the size of data chunks read from flash memory. These methods collectively enable running models up to twice the size of the available DRAM, with a 4-5x and 20-25x increase in inference speed compared to naive loading approaches in CPU and GPU, respectively. Our integration of sparsity awareness, context-adaptive loading, and a hardware-oriented design paves the way for effective inference of LLMs on devices with limited memory.",True,True,"Keivan Alizadeh and Iman Mirzadeh and Dmitry Belenko and Karen Khatamifard and Minsik Cho and Carlo C. Del Mundo and Mohammad Rastegari and Mehrdad Farajtabar",2023.0,,https://doi.org/10.48550/arXiv.2312.11514,10.48550/ARXIV.2312.11514,CoRR,"LLM in a flash: Efficient Large Language Model Inference with Limited Memory",LLM in a Flash: Efficient Large Language Model Inference with ...,https://machinelearning.apple.com/research/efficient-large-language,"# LLM in a Flash: Efficient Large Language Model Inference with Limited Memory This paper tackles the challenge of efficiently running LLMs that exceed the available DRAM capacity by storing the model parameters in flash memory, but bringing them on demand to DRAM. Our method involves constructing an inference cost model that takes into account the characteristics of flash memory, guiding us to optimize in two critical areas: reducing the volume of data transferred from flash and reading data in larger, more contiguous chunks. Our integration of sparsity awareness, context-adaptive loading, and a hardware-oriented design paves the way for effective inference of LLMs on devices with limited memory." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,jawahar2023llm,\cite{jawahar2023llm},LLM Performance Predictors are good initializers for Architecture Search,http://arxiv.org/abs/2310.16712v2,"In this work, we utilize Large Language Models (LLMs) for a novel use case: constructing Performance Predictors (PP) that estimate the performance of specific deep neural network architectures on downstream tasks. We create PP prompts for LLMs, comprising (i) role descriptions, (ii) instructions for the LLM, (iii) hyperparameter definitions, and (iv) demonstrations presenting sample architectures with efficiency metrics and `training from scratch' performance. In machine translation (MT) tasks, GPT-4 with our PP prompts (LLM-PP) achieves a SoTA mean absolute error and a slight degradation in rank correlation coefficient compared to baseline predictors. Additionally, we demonstrate that predictions from LLM-PP can be distilled to a compact regression model (LLM-Distill-PP), which surprisingly retains much of the performance of LLM-PP. This presents a cost-effective alternative for resource-intensive performance estimation. 
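The "LLM in a flash" record above cuts flash traffic with "windowing": neurons activated for recent tokens stay resident in DRAM, so only the delta must be fetched. A minimal sketch with set arithmetic standing in for real flash I/O; NeuronWindow is a hypothetical helper, not the paper's API:

```python
from collections import deque

# Sketch of the windowing idea: keep neurons activated in a sliding window of
# recent tokens in DRAM, and fetch only the missing ones from flash.
class NeuronWindow:
    def __init__(self, window: int = 4):
        self.recent = deque(maxlen=window)   # per-token activated neuron sets
        self.resident = set()                # neurons currently in DRAM

    def step(self, activated: set) -> set:
        to_fetch = activated - self.resident       # only the delta hits flash
        self.recent.append(activated)
        self.resident = set().union(*self.recent)  # out-of-window neurons evicted
        return to_fetch

win = NeuronWindow()
print(len(win.step({1, 2, 3})))   # 3: cold start, everything fetched
print(len(win.step({2, 3, 4})))   # 1: only neuron 4 is new
```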
Specifically, for Neural Architecture Search (NAS), we introduce a Hybrid-Search algorithm (HS-NAS) employing LLM-Distill-PP for the initial search stages and reverting to the baseline predictor later. HS-NAS performs similarly to SoTA NAS, reducing search hours by approximately 50%, and in some cases, improving latency, GFLOPs, and model size. The code can be found at: https://github.com/UBC-NLP/llmas.",True,True,"Jawahar, Ganesh and Abdul-Mageed, Muhammad and Lakshmanan, Laks VS and Ding, Dujian",2023.0,,,,arXiv preprint arXiv:2310.16712,LLM Performance Predictors are good initializers for Architecture Search,LLM Performance Predictors are good initializers for Architecture Search,http://arxiv.org/pdf/2310.16712v2,"In this work, we utilize Large Language Models (LLMs) for a novel use case: constructing Performance Predictors (PP) that estimate the performance of specific deep neural network architectures on downstream tasks. We create PP prompts for LLMs, comprising (i) role descriptions, (ii) instructions for the LLM, (iii) hyperparameter definitions, and (iv) demonstrations presenting sample architectures with efficiency metrics and `training from scratch' performance. In machine translation (MT) tasks, GPT-4 with our PP prompts (LLM-PP) achieves a SoTA mean absolute error and a slight degradation in rank correlation coefficient compared to baseline predictors. Additionally, we demonstrate that predictions from LLM-PP can be distilled to a compact regression model (LLM-Distill-PP), which surprisingly retains much of the performance of LLM-PP. This presents a cost-effective alternative for resource-intensive performance estimation. Specifically, for Neural Architecture Search (NAS), we introduce a Hybrid-Search algorithm (HS-NAS) employing LLM-Distill-PP for the initial search stages and reverting to the baseline predictor later. HS-NAS performs similarly to SoTA NAS, reducing search hours by approximately 50%, and in some cases, improving latency, GFLOPs, and model size. The code can be found at: https://github.com/UBC-NLP/llmas." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,huang2024new,\cite{huang2024new},"New Solutions on LLM Acceleration, Optimization, and Application",http://arxiv.org/abs/2406.10903v1,"Large Language Models (LLMs) have become extremely potent instruments with exceptional capacities for comprehending and producing human-like text in a wide range of applications. However, the increasing size and complexity of LLMs present significant challenges in both training and deployment, leading to substantial computational and storage costs as well as heightened energy consumption. In this paper, we provide a review of recent advancements and research directions aimed at addressing these challenges and enhancing the efficiency of LLM-based systems. We begin by discussing algorithm-level acceleration techniques focused on optimizing LLM inference speed and resource utilization. We also explore LLM-hardware co-design strategies with a vision to improve system efficiency by tailoring hardware architectures to LLM requirements. Further, we delve into LLM-to-accelerator compilation approaches, which involve customizing hardware accelerators for efficient LLM deployment. 
Finally, as a case study to leverage LLMs for assisting circuit design, we examine LLM-aided design methodologies for an important task: High-Level Synthesis (HLS) functional verification, by creating a new dataset that contains a large number of buggy and bug-free codes, which can be essential for training LLMs to specialize on HLS verification and debugging. For each aspect mentioned above, we begin with a detailed background study, followed by the presentation of several novel solutions proposed to overcome specific challenges. We then outline future research directions to drive further advancements. Through these efforts, we aim to pave the way for more efficient and scalable deployment of LLMs across a diverse range of applications.",True,True,"Huang, Yingbing and Wan, Lily Jiaxin and Ye, Hanchen and Jha, Manvi and Wang, Jinghua and Li, Yuhong and Zhang, Xiaofan and Chen, Deming",2024.0,,,,,"New Solutions on LLM Acceleration, Optimization, and Application","New Solutions on LLM Acceleration, Optimization, and Application",http://arxiv.org/pdf/2406.10903v1,"Large Language Models (LLMs) have become extremely potent instruments with exceptional capacities for comprehending and producing human-like text in a wide range of applications. However, the increasing size and complexity of LLMs present significant challenges in both training and deployment, leading to substantial computational and storage costs as well as heightened energy consumption. In this paper, we provide a review of recent advancements and research directions aimed at addressing these challenges and enhancing the efficiency of LLM-based systems. We begin by discussing algorithm-level acceleration techniques focused on optimizing LLM inference speed and resource utilization. We also explore LLM-hardware co-design strategies with a vision to improve system efficiency by tailoring hardware architectures to LLM requirements. Further, we delve into LLM-to-accelerator compilation approaches, which involve customizing hardware accelerators for efficient LLM deployment. Finally, as a case study to leverage LLMs for assisting circuit design, we examine LLM-aided design methodologies for an important task: High-Level Synthesis (HLS) functional verification, by creating a new dataset that contains a large number of buggy and bug-free codes, which can be essential for training LLMs to specialize on HLS verification and debugging. For each aspect mentioned above, we begin with a detailed background study, followed by the presentation of several novel solutions proposed to overcome specific challenges. We then outline future research directions to drive further advancements. Through these efforts, we aim to pave the way for more efficient and scalable deployment of LLMs across a diverse range of applications." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,liu2024optimizing,\cite{liu2024optimizing},Optimizing LLM Queries in Relational Data Analytics Workloads,http://arxiv.org/abs/2403.05821v2,"Batch data analytics is a growing application for Large Language Models (LLMs). LLMs enable users to perform a wide range of natural language tasks, such as classification, entity extraction, and translation, over large datasets. However, LLM inference is highly costly and slow: for example, an NVIDIA L4 GPU running Llama3-8B can only process 6 KB of text per second, taking about a day to handle 15 GB of data; processing a similar amount of data costs around $10K on OpenAI's GPT-4o. 
In this paper, we propose novel techniques that can significantly reduce the cost of LLM calls for relational data analytics workloads. Our key contribution is developing efficient algorithms for reordering the rows and the fields within each row of an input table to maximize key-value (KV) cache reuse when performing LLM serving. As such, our approach can be easily applied to existing analytics systems and serving platforms. Our evaluation shows that our solution can yield up to 3.4x improvement in job completion time on a benchmark of diverse LLM-based queries using Llama 3 models. Our solution also achieves a 32% cost savings under OpenAI and Anthropic pricing models.",True,True,"Liu, Shu and Biswal, Asim and Cheng, Audrey and Mo, Xiangxi and Cao, Shiyi and Gonzalez, Joseph E and Stoica, Ion and Zaharia, Matei",2024.0,,,,arXiv preprint arXiv:2403.05821,Optimizing LLM Queries in Relational Data Analytics Workloads,Optimizing LLM Queries in Relational Data Analytics Workloads,http://arxiv.org/pdf/2403.05821v2,"Batch data analytics is a growing application for Large Language Models (LLMs). LLMs enable users to perform a wide range of natural language tasks, such as classification, entity extraction, and translation, over large datasets. However, LLM inference is highly costly and slow: for example, an NVIDIA L4 GPU running Llama3-8B can only process 6 KB of text per second, taking about a day to handle 15 GB of data; processing a similar amount of data costs around $10K on OpenAI's GPT-4o. In this paper, we propose novel techniques that can significantly reduce the cost of LLM calls for relational data analytics workloads. Our key contribution is developing efficient algorithms for reordering the rows and the fields within each row of an input table to maximize key-value (KV) cache reuse when performing LLM serving. As such, our approach can be easily applied to existing analytics systems and serving platforms. Our evaluation shows that our solution can yield up to 3.4x improvement in job completion time on a benchmark of diverse LLM-based queries using Llama 3 models. Our solution also achieves a 32% cost savings under OpenAI and Anthropic pricing models." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,hubara2018quantized,\cite{hubara2018quantized},Quantized neural networks: Training neural networks with low precision weights and activations,,,True,False,"Hubara, Itay and Courbariaux, Matthieu and Soudry, Daniel and El-Yaniv, Ran and Bengio, Yoshua",2018.0,,,,Journal of Machine Learning Research,Quantized neural networks: Training neural networks with low precision weights and activations,Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations,http://arxiv.org/pdf/1609.07061v1,"We introduce a method to train Quantized Neural Networks (QNNs) --- neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. 
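The "Optimizing LLM Queries in Relational Data Analytics Workloads" record above reorders rows and fields so that prompts share long prefixes, maximizing KV-cache reuse during serving. A minimal sketch using a distinct-value heuristic, which is an assumption; the paper derives its own reordering algorithms:

```python
# Sketch: order table fields so the most repetitive columns come first,
# lengthening the shared prompt prefix (and KV-cache hits) across rows.
def order_fields_for_reuse(rows: list) -> list:
    fields = list(rows[0])
    # Fewer distinct values first -> longer shared prefixes across rows.
    return sorted(fields, key=lambda f: len({r[f] for r in rows}))

rows = [
    {"city": "NYC", "lang": "en", "review": "great"},
    {"city": "NYC", "lang": "en", "review": "slow"},
    {"city": "SF",  "lang": "en", "review": "fine"},
]
order = order_fields_for_reuse(rows)
prompts = [" | ".join(f"{f}={r[f]}" for f in order) for r in rows]
print(order)       # ['lang', 'city', 'review']: 'lang' is shared by all rows
print(prompts[0])
```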
For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well which enables gradients computation using only bit-wise operation. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,GPT3.int8,\cite{GPT3.int8},LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale,,,True,False,"Tim Dettmers and Mike Lewis and Younes Belkada and Luke Zettlemoyer",2022.0,,http://papers.nips.cc/paper\_files/paper/2022/hash/c3ba4962c05c49636d4c6206a97e9c8a-Abstract-Conference.html,,,LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale,LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale,http://arxiv.org/pdf/2208.07339v2,"Large language models have been widely adopted but require significant GPU memory for inference. We develop a procedure for Int8 matrix multiplication for feed-forward and attention projection layers in transformers, which cut the memory needed for inference by half while retaining full precision performance. With our method, a 175B parameter 16/32-bit checkpoint can be loaded, converted to Int8, and used immediately without performance degradation. This is made possible by understanding and working around properties of highly systematic emergent features in transformer language models that dominate attention and transformer predictive performance. To cope with these features, we develop a two-part quantization procedure, LLM.int8(). We first use vector-wise quantization with separate normalization constants for each inner product in the matrix multiplication, to quantize most of the features. However, for the emergent outliers, we also include a new mixed-precision decomposition scheme, which isolates the outlier feature dimensions into a 16-bit matrix multiplication while still more than 99.9% of values are multiplied in 8-bit. Using LLM.int8(), we show empirically it is possible to perform inference in LLMs with up to 175B parameters without any performance degradation. This result makes such models much more accessible, for example making it possible to use OPT-175B/BLOOM on a single server with consumer GPUs. We open-source our software." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,Deja,\cite{Deja},Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time,http://arxiv.org/abs/2310.17157v1,"Large language models (LLMs) with hundreds of billions of parameters have sparked a new wave of exciting AI applications. However, they are computationally expensive at inference time. Sparsity is a natural approach to reduce this cost, but existing methods either require costly retraining, have to forgo LLM's in-context learning ability, or do not yield wall-clock time speedup on modern hardware. We hypothesize that contextual sparsity, which are small, input-dependent sets of attention heads and MLP parameters that yield approximately the same output as the dense model for a given input, can address these issues. 
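The LLM.int8() record above isolates outlier feature dimensions into a 16-bit matmul while everything else runs in 8-bit. A minimal sketch of that mixed-precision decomposition with simplified absmax scaling; the paper uses per-vector quantization constants, and the threshold here is the paper's 6.0 default:

```python
import numpy as np

# Sketch of LLM.int8()-style decomposition: outlier columns of X stay in float,
# the rest are absmax-quantized to int8, and the two partial products are summed.
def mixed_int8_matmul(X: np.ndarray, W: np.ndarray, threshold: float = 6.0):
    outlier = np.abs(X).max(axis=0) > threshold        # outlier feature dims
    Xo, Wo = X[:, outlier], W[outlier, :]              # high-precision path
    Xr, Wr = X[:, ~outlier], W[~outlier, :]
    sx = np.abs(Xr).max() / 127 or 1.0                 # simplified absmax scales
    sw = np.abs(Wr).max() / 127 or 1.0
    Xq = np.round(Xr / sx).astype(np.int8)
    Wq = np.round(Wr / sw).astype(np.int8)
    int8_part = Xq.astype(np.int32) @ Wq.astype(np.int32) * (sx * sw)
    return int8_part + Xo @ Wo                          # recombine both paths

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 64)); X[:, 3] *= 20            # plant an outlier column
W = rng.normal(size=(64, 32))
err = np.abs(mixed_int8_matmul(X, W) - X @ W).max()
print(f"max abs error vs float matmul: {err:.3f}")
```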
We show that contextual sparsity exists, that it can be accurately predicted, and that we can exploit it to speed up LLM inference in wall-clock time without compromising LLM's quality or in-context learning ability. Based on these insights, we propose DejaVu, a system that uses a low-cost algorithm to predict contextual sparsity on the fly given inputs to each layer, along with an asynchronous and hardware-aware implementation that speeds up LLM inference. We validate that DejaVu can reduce the inference latency of OPT-175B by over 2X compared to the state-of-the-art FasterTransformer, and over 6X compared to the widely used Hugging Face implementation, without compromising model quality. The code is available at https://github.com/FMInference/DejaVu.",True,True,"Zichang Liu and Jue Wang and Tri Dao and Tianyi Zhou and Binhang Yuan and Zhao Song and Anshumali Shrivastava and Ce Zhang and Yuandong Tian and Christopher R{\'{e}} and Beidi Chen",2023.0,,https://proceedings.mlr.press/v202/liu23am.html,,,Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time,Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time,http://arxiv.org/pdf/2310.17157v1,"Large language models (LLMs) with hundreds of billions of parameters have sparked a new wave of exciting AI applications. However, they are computationally expensive at inference time. Sparsity is a natural approach to reduce this cost, but existing methods either require costly retraining, have to forgo LLM's in-context learning ability, or do not yield wall-clock time speedup on modern hardware. We hypothesize that contextual sparsity, which are small, input-dependent sets of attention heads and MLP parameters that yield approximately the same output as the dense model for a given input, can address these issues. We show that contextual sparsity exists, that it can be accurately predicted, and that we can exploit it to speed up LLM inference in wall-clock time without compromising LLM's quality or in-context learning ability. Based on these insights, we propose DejaVu, a system that uses a low-cost algorithm to predict contextual sparsity on the fly given inputs to each layer, along with an asynchronous and hardware-aware implementation that speeds up LLM inference. We validate that DejaVu can reduce the inference latency of OPT-175B by over 2X compared to the state-of-the-art FasterTransformer, and over 6X compared to the widely used Hugging Face implementation, without compromising model quality. The code is available at https://github.com/FMInference/DejaVu." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,SmoothQuant,\cite{SmoothQuant},"SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models",http://arxiv.org/abs/2211.10438v7,"Large language models (LLMs) show excellent performance but are compute- and memory-intensive. Quantization can reduce memory and accelerate inference. However, existing methods cannot maintain accuracy and hardware efficiency at the same time. We propose SmoothQuant, a training-free, accuracy-preserving, and general-purpose post-training quantization (PTQ) solution to enable 8-bit weight, 8-bit activation (W8A8) quantization for LLMs. Based on the fact that weights are easy to quantize while activations are not, SmoothQuant smooths the activation outliers by offline migrating the quantization difficulty from activations to weights with a mathematically equivalent transformation. 
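The SmoothQuant abstract begun above migrates quantization difficulty from activations to weights through a mathematically equivalent transformation. A minimal sketch of the per-channel smoothing, following the paper's scale s_j = max|X_j|^alpha / max|W_j|^(1-alpha) with the default migration strength alpha = 0.5:

```python
import numpy as np

# Sketch of SmoothQuant's equivalent transformation: Y = (X / s) @ (s * W)
# is unchanged, but the smoothed activations are far easier to quantize.
def smooth(X: np.ndarray, W: np.ndarray, alpha: float = 0.5):
    s = (np.abs(X).max(axis=0) ** alpha) / (np.abs(W).max(axis=1) ** (1 - alpha))
    return X / s, W * s[:, None]          # smoothed activations and weights

rng = np.random.default_rng(2)
X = rng.normal(size=(8, 16)); X[:, 0] *= 50          # activation outlier channel
W = rng.normal(size=(16, 4))
Xs, Ws = smooth(X, W)
assert np.allclose(Xs @ Ws, X @ W)                   # exact equivalence
print("outlier channel max before/after:", np.abs(X[:, 0]).max(), np.abs(Xs[:, 0]).max())
```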
SmoothQuant enables an INT8 quantization of both weights and activations for all the matrix multiplications in LLMs, including OPT, BLOOM, GLM, MT-NLG, Llama-1/2, Falcon, Mistral, and Mixtral models. We demonstrate up to 1.56x speedup and 2x memory reduction for LLMs with negligible loss in accuracy. SmoothQuant enables serving 530B LLM within a single node. Our work offers a turn-key solution that reduces hardware costs and democratizes LLMs. Code is available at https://github.com/mit-han-lab/smoothquant.",True,True,"Guangxuan Xiao and Ji Lin and Micka{\"{e}}l Seznec and Hao Wu and Julien Demouth and Song Han",2023.0,,https://proceedings.mlr.press/v202/xiao23c.html,,,"SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models",SmoothQuant: Accurate and Efficient Post-Training Quantization for ...,https://arxiv.org/abs/2211.10438,"arXiv:2211.10438 (cs): SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models, by Guangxuan Xiao and 5 other authors." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,cnn_pruning,\cite{cnn_pruning},Pruning convolutional neural networks for resource efficient inference,,,True,False,"Molchanov, Pavlo and Tyree, Stephen and Karras, Tero and Aila, Timo and Kautz, Jan",2016.0,,,,arXiv preprint arXiv:1611.06440,Pruning convolutional neural networks for resource efficient inference,Pruning Convolutional Neural Networks for Resource Efficient Inference,http://arxiv.org/pdf/1611.06440v2,"We propose a new formulation for pruning convolutional kernels in neural networks to enable efficient inference. We interleave greedy criteria-based pruning with fine-tuning by backpropagation - a computationally efficient procedure that maintains good generalization in the pruned network. We propose a new criterion based on Taylor expansion that approximates the change in the cost function induced by pruning network parameters. We focus on transfer learning, where large pretrained networks are adapted to specialized tasks. The proposed criterion demonstrates superior performance compared to other criteria, e.g. the norm of kernel weights or feature map activation, for pruning large CNNs after adaptation to fine-grained classification tasks (Birds-200 and Flowers-102) relying only on the first order gradient information. We also show that pruning can lead to more than 10x theoretical (5x practical) reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier. Finally, we show results for the large-scale ImageNet dataset to emphasize the flexibility of our approach."
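The Molchanov et al. record above ranks channels by a Taylor-expansion criterion, a first-order estimate of how much the loss would change if a channel were removed. A minimal sketch of computing that saliency, |activation x gradient| per output channel; the loss here is a stand-in, and the paper interleaves pruning with fine-tuning:

```python
import torch
import torch.nn as nn

# Sketch of the Taylor-expansion pruning criterion: saliency of a channel is
# the magnitude of (feature map * its gradient), averaged over data and space.
conv = nn.Conv2d(3, 8, 3)
x = torch.randn(4, 3, 16, 16)
act = conv(x)
act.retain_grad()                       # keep the gradient of the feature map
loss = act.pow(2).mean()                # stand-in loss
loss.backward()

saliency = (act * act.grad).abs().mean(dim=(0, 2, 3))   # one score per channel
prune_order = saliency.argsort()                        # least important first
print("prune first:", prune_order[:3].tolist())
```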
"CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,Pruner-Zero,\cite{Pruner-Zero},"Pruner-Zero: Evolving Symbolic Pruning Metric From Scratch for Large Language Models",,,True,False,"Peijie Dong and Lujun Li and Zhenheng Tang and Xiang Liu and Xinglin Pan and Qiang Wang and Xiaowen Chu",2024.0,,https://openreview.net/forum?id=1tRLxQzdep,,,"Pruner-Zero: Evolving Symbolic Pruning Metric From Scratch for Large Language Models",Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs,https://github.com/pprp/Pruner-Zero,"GitHub - pprp/Pruner-Zero: [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs | main.py | main.py | Update rest code at once | Jun 6, 2024 | | main_opt.py | main_opt.py | Update rest code at once | Jun 6, 2024 | **Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for Large Language Models** Extensive experiments on LLaMA and LLaMA-2 on language modeling and zero-shot tasks demonstrate that our Pruner-Zero obtains superior performance than SOTA post-training pruning methods. Below is an example command for pruning LLaMA-7B with Pruner-Zero, to achieve unstructured 50% sparsity. title={Pruner-Zero: Evolving Symbolic Pruning Metric from Scratch for Large Language Models}, [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs" "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,SparseGPT,\cite{SparseGPT},SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot,http://arxiv.org/abs/2301.00774v3,"We show for the first time that large-scale generative pretrained transformer (GPT) family models can be pruned to at least 50% sparsity in one-shot, without any retraining, at minimal loss of accuracy. This is achieved via a new pruning method called SparseGPT, specifically designed to work efficiently and accurately on massive GPT-family models. We can execute SparseGPT on the largest available open-source models, OPT-175B and BLOOM-176B, in under 4.5 hours, and can reach 60% unstructured sparsity with negligible increase in perplexity: remarkably, more than 100 billion weights from these models can be ignored at inference time. SparseGPT generalizes to semi-structured (2:4 and 4:8) patterns, and is compatible with weight quantization approaches. The code is available at: https://github.com/IST-DASLab/sparsegpt.",True,True,"Elias Frantar and Dan Alistarh",2023.0,,https://proceedings.mlr.press/v202/frantar23a.html,,,SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot,SparseGPT: Massive Language Models Can Be Accurately ...,https://arxiv.org/abs/2301.00774,by E Frantar · 2023 · Cited by 887 — We show for the first time that large-scale generative pretrained transformer (GPT) family models can be pruned to at least 50% sparsity in one-shot.See more "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,DistiLLM,\cite{DistiLLM},DistiLLM: Towards Streamlined Distillation for Large Language Models,http://arxiv.org/abs/2402.03898v2,"Knowledge distillation (KD) is widely used for compressing a teacher model to a smaller student model, reducing its inference cost and memory footprint while preserving model capabilities. However, current KD methods for auto-regressive sequence models (e.g., large language models) suffer from missing a standardized objective function. 
Moreover, the recent use of student-generated outputs to address training-inference mismatches has significantly escalated computational costs. To tackle these issues, we introduce DistiLLM, a more effective and efficient KD framework for auto-regressive language models. DistiLLM comprises two components: (1) a novel skew Kullback-Leibler divergence loss, where we unveil and leverage its theoretical properties, and (2) an adaptive off-policy approach designed to enhance the efficiency in utilizing student-generated outputs. Extensive experiments, including instruction-following tasks, demonstrate the effectiveness of DistiLLM in building high-performing student models while achieving up to 4.3$\times$ speedup compared to recent KD methods.",True,True,"Jongwoo Ko and Sungnyun Kim and Tianyi Chen and Se{-}Young Yun",2024.0,,https://openreview.net/forum?id=lsHZNNoC7r,,,DistiLLM: Towards Streamlined Distillation for Large Language Models,DistiLLM: Towards Streamlined Distillation for Large Language Models,http://arxiv.org/pdf/2402.03898v2,"Knowledge distillation (KD) is widely used for compressing a teacher model to a smaller student model, reducing its inference cost and memory footprint while preserving model capabilities. However, current KD methods for auto-regressive sequence models (e.g., large language models) suffer from missing a standardized objective function. Moreover, the recent use of student-generated outputs to address training-inference mismatches has significantly escalated computational costs. To tackle these issues, we introduce DistiLLM, a more effective and efficient KD framework for auto-regressive language models. DistiLLM comprises two components: (1) a novel skew Kullback-Leibler divergence loss, where we unveil and leverage its theoretical properties, and (2) an adaptive off-policy approach designed to enhance the efficiency in utilizing student-generated outputs. Extensive experiments, including instruction-following tasks, demonstrate the effectiveness of DistiLLM in building high-performing student models while achieving up to 4.3$\times$ speedup compared to recent KD methods." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,MiniLLM,\cite{MiniLLM},MiniLLM: Knowledge Distillation of Large Language Models,http://arxiv.org/abs/2306.08543v4,"Knowledge Distillation (KD) is a promising technique for reducing the high computational demand of large language models (LLMs). However, previous KD methods are primarily applied to white-box classification models or training small models to imitate black-box model APIs like ChatGPT. How to effectively distill the knowledge of white-box LLMs into small models is still under-explored, which becomes more important with the prosperity of open-source LLMs. In this work, we propose a KD approach that distills LLMs into smaller language models. We first replace the forward Kullback-Leibler divergence (KLD) objective in the standard KD approaches with reverse KLD, which is more suitable for KD on generative language models, to prevent the student model from overestimating the low-probability regions of the teacher distribution. Then, we derive an effective optimization approach to learn this objective. The student models are named MiniLLM. Extensive experiments in the instruction-following setting show that MiniLLM generates more precise responses with higher overall quality, lower exposure bias, better calibration, and higher long-text generation performance than the baselines. 
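The DistiLLM and MiniLLM records above both revisit the distillation objective; MiniLLM in particular swaps the forward KLD for reverse KLD so the student does not overestimate low-probability regions of the teacher. A minimal single-token sketch contrasting the two directions, with toy logits standing in for sequence-level KD:

```python
import torch
import torch.nn.functional as F

# Forward KL(teacher || student) is the standard KD loss; reverse
# KL(student || teacher) is MiniLLM's choice (mode-seeking rather than
# mass-covering).
teacher_logits = torch.tensor([[4.0, 2.0, 0.1, -3.0]])
student_logits = torch.randn(1, 4)

log_p_student = F.log_softmax(student_logits, dim=-1)
log_p_teacher = F.log_softmax(teacher_logits, dim=-1)

forward_kl = F.kl_div(log_p_student, log_p_teacher, log_target=True, reduction="batchmean")
reverse_kl = F.kl_div(log_p_teacher, log_p_student, log_target=True, reduction="batchmean")
print(float(forward_kl), float(reverse_kl))
```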
Our method is scalable for different model families with 120M to 13B parameters. Our code, data, and model checkpoints can be found in https://github.com/microsoft/LMOps/tree/main/minillm.",True,True,"Yuxian Gu and Li Dong and Furu Wei and Minlie Huang",2024.0,,https://openreview.net/forum?id=5h0qf7IBZZ,,,MiniLLM: Knowledge Distillation of Large Language Models,MiniLLM: Knowledge Distillation of Large Language Models,http://arxiv.org/pdf/2306.08543v4,"Knowledge Distillation (KD) is a promising technique for reducing the high computational demand of large language models (LLMs). However, previous KD methods are primarily applied to white-box classification models or training small models to imitate black-box model APIs like ChatGPT. How to effectively distill the knowledge of white-box LLMs into small models is still under-explored, which becomes more important with the prosperity of open-source LLMs. In this work, we propose a KD approach that distills LLMs into smaller language models. We first replace the forward Kullback-Leibler divergence (KLD) objective in the standard KD approaches with reverse KLD, which is more suitable for KD on generative language models, to prevent the student model from overestimating the low-probability regions of the teacher distribution. Then, we derive an effective optimization approach to learn this objective. The student models are named MiniLLM. Extensive experiments in the instruction-following setting show that MiniLLM generates more precise responses with higher overall quality, lower exposure bias, better calibration, and higher long-text generation performance than the baselines. Our method is scalable for different model families with 120M to 13B parameters. Our code, data, and model checkpoints can be found in https://github.com/microsoft/LMOps/tree/main/minillm." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,pytorch-1,\cite{pytorch-1},PyTorch: An Open Source Machine Learning Framework,,,True,False,{PyTorch Contributors},2024.0,,https://pytorch.org/,,,PyTorch: An Open Source Machine Learning Framework,PyTorch,https://en.wikipedia.org/wiki/PyTorch,"PyTorch is a machine learning library based on the Torch library, used for applications such as computer vision and natural language processing, originally developed by Meta AI and now part of the Linux Foundation umbrella. It is one of the most popular deep learning frameworks, alongside others such as TensorFlow, offering free and open-source software released under the modified BSD license. PyTorch defines a module called nn (torch.nn) to describe neural networks and to support training." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,tensorflow,\cite{tensorflow},TensorFlow: An Open Source Machine Learning Framework for Everyone,,,True,False,{TensorFlow Contributors},2024.0,,https://www.tensorflow.org/,,,TensorFlow: An Open Source Machine Learning Framework for Everyone,tensorflow/tensorflow: An Open Source Machine Learning ...,https://github.com/tensorflow/tensorflow,"TensorFlow is an end-to-end open source platform for machine learning.
It has a comprehensive, flexible ecosystem of tools, libraries, and community resources." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,deepspeed,\cite{deepspeed},DeepSpeed: Advancing the Science of AI Through Efficient Training of Large Models,,,True,False,{Microsoft DeepSpeed Team},2024.0,,https://www.deepspeed.ai/,,,DeepSpeed: Advancing the Science of AI Through Efficient Training of Large Models,DeepSpeed: Latest News,https://www.deepspeed.ai/,"DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,huggingface_transformers,\cite{huggingface_transformers},Transformers Documentation,,,True,False,{HuggingFace Team},2024.0,,https://huggingface.co/docs/transformers/index,,,Transformers Documentation,Transformers — transformers 3.0.2 documentation - Hugging Face,https://huggingface.co/transformers/v3.0.2/index.html,"Transformers is a state-of-the-art NLP library for Pytorch and TensorFlow 2.0, providing architectures for NLU and NLG with high performance and low barrier to" "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,flexgen,\cite{flexgen},"FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU",http://arxiv.org/abs/2303.06865v2,"The high computational and memory requirements of large language model (LLM) inference make it feasible only with multiple high-end accelerators. Motivated by the emerging demand for latency-insensitive tasks with batched processing, this paper initiates the study of high-throughput LLM inference using limited resources, such as a single commodity GPU. We present FlexGen, a high-throughput generation engine for running LLMs with limited GPU memory. FlexGen can be flexibly configured under various hardware resource constraints by aggregating memory and computation from the GPU, CPU, and disk. By solving a linear programming problem, it searches for efficient patterns to store and access tensors. FlexGen further compresses the weights and the attention cache to 4 bits with negligible accuracy loss. These techniques enable FlexGen to have a larger space of batch size choices and thus significantly increase maximum throughput. As a result, when running OPT-175B on a single 16GB GPU, FlexGen achieves significantly higher throughput compared to state-of-the-art offloading systems, reaching a generation throughput of 1 token/s for the first time with an effective batch size of 144. On the HELM benchmark, FlexGen can benchmark a 30B model with a 16GB GPU on 7 representative sub-scenarios in 21 hours. The code is available at https://github.com/FMInference/FlexGen",True,True,"Sheng, Ying and Zheng, Lianmin and Yuan, Binhang and Li, Zhuohan and Ryabinin, Max and Chen, Beidi and Liang, Percy and R{\'e}, Christopher and Stoica, Ion and Zhang, Ce",2023.0,,,,,"FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU",[PDF] FlexGen: High-Throughput Generative Inference of Large Language ...,https://openreview.net/pdf?id=RRntzKrBTp,"FlexGen is a high-throughput engine for running LLMs with limited GPU memory, using GPU, CPU, and disk, and achieving 1 token/s throughput." 
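The FlexGen record above aggregates GPU, CPU, and disk memory and searches for efficient tensor placement by solving a linear program. A minimal sketch of the kind of cost model such a search optimizes over; the bandwidth figures and the enumerated splits below are illustrative assumptions, not FlexGen's:

```python
# Sketch of a FlexGen-style offloading cost model: per-token time is dominated
# by streaming the weight bytes that are not GPU-resident.
BANDWIDTH_GB_S = {"gpu": 900.0, "cpu": 25.0, "disk": 2.0}   # assumed numbers

def per_token_load_ms(weight_gb: float, split: dict) -> float:
    assert abs(sum(split.values()) - 1.0) < 1e-9
    return sum(weight_gb * frac / BANDWIDTH_GB_S[tier] * 1000
               for tier, frac in split.items())

weights_gb = 60.0                                           # e.g. a 30B fp16 model
for split in ({"gpu": 0.2, "cpu": 0.5, "disk": 0.3},
              {"gpu": 0.2, "cpu": 0.8, "disk": 0.0}):
    print(split, f"{per_token_load_ms(weights_gb, split):.0f} ms/token")
```

FlexGen searches over such splits (plus batch sizes and cache placement) rather than enumerating a fixed list, which is why it can trade latency for much higher batched throughput.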
"CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,Orca,\cite{Orca},"Orca: {A} Distributed Serving System for Transformer-Based Generative Models",,,True,False,"Gyeong{-}In Yu and Joo Seong Jeong and Geon{-}Woo Kim and Soojeong Kim and Byung{-}Gon Chun",2022.0,,https://www.usenix.org/conference/osdi22/presentation/yu,,,"Orca: {A} Distributed Serving System for Transformer-Based Generative Models",Orca: A Distributed Serving System for Transformer-Based ...,https://www.usenix.org/conference/osdi22/presentation/yu,"+ Poster Session # Orca: A Distributed Serving System for Transformer-Based Generative Models In this paper, we propose iteration-level scheduling, a new scheduling mechanism that schedules execution at the granularity of iteration (instead of request) where the scheduler invokes the execution engine to run only a single iteration of the model on the batch. iteration-level scheduling to a Transformer model at the same time, we suggest selective batching, which applies batching only to a selected set of operations. OSDI '22 Open Access Sponsored by NetApp USENIX is committed to Open Access to the research presented at our events. Support USENIX and our commitment to Open Access. title = {Orca: A Distributed Serving System for {Transformer-Based} Generative Models}, url = {https://www.usenix.org/conference/osdi22/presentation/yu}, " "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,zhao4,\cite{zhao4},A Reconfigurable 0.69-1.02 nJ/Classification Biomedical AI Processor for Intelligent Health Monitoring Devices,,,True,False,"Zhao, Yuanzhe and Wang, Yuheng and Wang, Zijian and Zhu, Yan and Martins, RP and Chan, Chi-Hang and Zhang, Minglei",2025.0,,,,,A Reconfigurable 0.69-1.02 nJ/Classification Biomedical AI Processor for Intelligent Health Monitoring Devices,4.5 BioAIP: A Reconfigurable Biomedical AI Processor with Adaptive ...,https://www.researchgate.net/publication/350171989_45_BioAIP_A_Reconfigurable_Biomedical_AI_Processor_with_Adaptive_Learning_for_Versatile_Intelligent_Health_Monitoring,A Reconfigurable 0.69-1.02nJ/Classification Biomedical AI Processor for Intelligent Health Monitoring Devices. Conference Paper. Apr 2025. Yuanzhe Zhao · Yuheng "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,zhao5,\cite{zhao5},A 28nm Value-Wise Hybrid-Domain Compute-in-Memory Macro with Heterogeneous Memory Fabric and Asynchronous Sparsity Manager,,,True,False,"Zhao, Yuanzhe and Wang, Yang and Wang, Yuheng and Xie, Heng and Zhu, Yan and Martins, RP and Chan, Chi-Hang and Yin, Shouyi and Zhang, Minglei",2025.0,,,,,A 28nm Value-Wise Hybrid-Domain Compute-in-Memory Macro with Heterogeneous Memory Fabric and Asynchronous Sparsity Manager,A 28nm Value-Wise Hybrid-Domain Compute-in-Memory ...,https://unpaywall.org/10.1109%2FCICC63670.2025.10982876,Abstract: Edge AI devices need to be energy-efficient and compact while maintaining sufficient accuracy. 
Compute-in-memory (CIM) is a promising approach to "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,zhao6,\cite{zhao6},A One-Shot Floating-Point Compute-in-Memory Macro Featuring PVT Robustness and Mismatch Tolerance for Edge LLMs,,,True,False,"Zhao, Yuanzhe and Xie, Heng and Wang, Zijian and Tian, Chunlin and Li, Li and Zhu, Yan and Martins, RP and Chan, Chi-Hang and Zhang, Minglei",2025.0,,,,,A One-Shot Floating-Point Compute-in-Memory Macro Featuring PVT Robustness and Mismatch Tolerance for Edge LLMs,A One-Shot Floating-Point Compute-in-Memory Macro ...,https://www.researchgate.net/publication/391898058_A_One-Shot_Floating-Point_Compute-in-Memory_Macro_Featuring_PVT_Robustness_and_Mismatch_Tolerance_for_Edge_LLMs,Robustness. Conference Paper. A One-Shot Floating-Point Compute-in-Memory Macro Featuring PVT Robustness and Mismatch Tolerance for Edge LLMs. "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,zhao7,\cite{zhao7},A 28-nm 3.32-nJ/Frame Compute-in-Memory CNN Processor With Layer Fusion for Always-on Applications,,,True,False,"Zhao, Yuanzhe and He, Pengyu and Zhu, Yan and Martins, Rui P and Chan, Chi-Hang and Zhang, Minglei",2025.0,,,,IEEE Transactions on Circuits and Systems I: Regular Papers,A 28-nm 3.32-nJ/Frame Compute-in-Memory CNN Processor With Layer Fusion for Always-on Applications,A 28-nm 3.32-nJ/Frame Compute-in-Memory CNN Processor With ...,https://ieeexplore.ieee.org/document/10902457/,This work presents an always-on CNN processor featuring compute-in-memory (CIM) and layer-fusion (LF) techniques. It demonstrates an end-to-end neural network "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,zhao8,\cite{zhao8},A Reconfigurable Floating-Point Compute-In-Memory With Analog Exponent Pre-Processes,,,True,False,"He, Pengyu and Zhao, Yuanzhe and Xie, Heng and Wang, Yang and Yin, Shouyi and Li, Li and Zhu, Yan and Martins, Rui P and Chan, Chi-Hang and Zhang, Minglei",2024.0,,,,IEEE Solid-State Circuits Letters,A Reconfigurable Floating-Point Compute-In-Memory With Analog Exponent Pre-Processes,A Reconfigurable Floating-Point Compute-in-Memory With ...,http://ieeexplore.ieee.org/document/10683795/,This letter presents a reconfigurable floating-point compute-in-memory (FP-CIM) macro that preprocesses the exponent in the analog domain. "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,tambe2021edgebert,\cite{tambe2021edgebert},"EdgeBERT: Sentence-Level Energy Optimizations for Latency-Aware Multi-Task NLP Inference",http://arxiv.org/abs/2011.14203v5,"Transformer-based language models such as BERT provide significant accuracy improvement for a multitude of natural language processing (NLP) tasks. However, their hefty computational and memory demands make them challenging to deploy to resource-constrained edge platforms with strict latency requirements. We present EdgeBERT, an in-depth algorithm-hardware co-design for latency-aware energy optimization for multi-task NLP. EdgeBERT employs entropy-based early exit predication in order to perform dynamic voltage-frequency scaling (DVFS), at a sentence granularity, for minimal energy consumption while adhering to a prescribed target latency. Computation and memory footprint overheads are further alleviated by employing a calibrated combination of adaptive attention span, selective network pruning, and floating-point quantization.
Furthermore, in order to maximize the synergistic benefits of these algorithms in always-on and intermediate edge computing settings, we specialize a 12nm scalable hardware accelerator system, integrating a fast-switching low-dropout voltage regulator (LDO), an all-digital phase-locked loop (ADPLL), as well as, high-density embedded non-volatile memories (eNVMs) wherein the sparse floating-point bit encodings of the shared multi-task parameters are carefully stored. Altogether, latency-aware multi-task NLP inference acceleration on the EdgeBERT hardware system generates up to 7x, 2.5x, and 53x lower energy compared to the conventional inference without early stopping, the latency-unbounded early exit approach, and CUDA adaptations on an Nvidia Jetson Tegra X2 mobile GPU, respectively.",True,True,"Tambe, Thierry and Hooper, Coleman and Pentecost, Lillian and Jia, Tianyu and Yang, En-Yu and Donato, Marco and Sanh, Victor and Whatmough, Paul and Rush, Alexander M and Brooks, David and others",2021.0,,,,,"EdgeBERT: Sentence-Level Energy Optimizations for Latency-Aware Multi-Task NLP Inference",Sentence-Level Energy Optimizations for Latency-Aware ...,https://dl.acm.org/doi/10.1145/3466752.3480095,"We present EdgeBERT, an in-depth algorithm-hardware co-design for latency-aware energy optimizations for multi-task NLP." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,zhao2023approxcaliper,\cite{zhao2023approxcaliper},ApproxCaliper: A programmable framework for application-aware neural network optimization,,,True,False,"Zhao, Yifan and Sharif, Hashim and Pao-Huang, Peter and Shah, Vatsin and Sivakumar, Arun Narenthiran and Valverde Gasparino, Mateus and Mahmoud, Abdulrahman and Zhao, Nathan and Adve, Sarita and Chowdhary, Girish and others",2023.0,,,,Proceedings of Machine Learning and Systems,ApproxCaliper: A programmable framework for application-aware neural network optimization,ApproxCaliper: A Programmable Framework for ...,https://ma3mool.github.io/publication/mlsys23.html,"""ApproxCaliper: A Programmable Framework for Application-aware Neural Network Optimization,"" Sixth Conference on Machine Learning and Systems (MLSys), Miami," "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,liberis2023differentiable,\cite{liberis2023differentiable},Differentiable neural network pruning to enable smart applications on microcontrollers,,,True,False,"Liberis, Edgar and Lane, Nicholas D",2023.0,,,,"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies",Differentiable neural network pruning to enable smart applications on microcontrollers,Differentiable Neural Network Pruning to Enable Smart Applications ...,https://dl.acm.org/doi/abs/10.1145/3569468,"We present a differentiable structured pruning method for convolutional neural networks, which integrates a model's MCU-specific resource usage and parameter" "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,zhao2024felix,\cite{zhao2024felix},Felix: Optimizing Tensor Programs with Gradient Descent,,,True,False,"Zhao, Yifan and Sharif, Hashim and Adve, Vikram and Misailovic, Sasa",2024.0,,,,,Felix: Optimizing Tensor Programs with Gradient Descent,Felix: Optimizing Tensor Programs with Gradient Descent,https://misailo.cs.illinois.edu/papers/felix-asplos24.pdf,Felix creates a differentiable space of tensor programs that is amenable to search by gradient descent.
Felix applies continuous relaxation on the space of "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,tam2024fedhybrid,\cite{tam2024fedhybrid},FedHybrid: Breaking the Memory Wall of Federated Learning via Hybrid Tensor Management,,,True,False,"Tam, Kahou and Tian, Chunlin and Li, Li and Zhao, Haikai and Xu, ChengZhong",2024.0,,,,,FedHybrid: Breaking the Memory Wall of Federated Learning via Hybrid Tensor Management,Breaking the Memory Wall of Federated Learning via Hybrid Tensor ...,https://www.researchgate.net/publication/385538059_FedHybrid_Breaking_the_Memory_Wall_of_Federated_Learning_via_Hybrid_Tensor_Management,FedHybrid: Breaking the Memory Wall of Federated Learning via Hybrid Tensor Management ... Wall for Heterogeneous Federated Learning via Progressive Training. "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,dvfsasplos,\cite{dvfsasplos},"Expanding Datacenter Capacity with {DVFS} Boosting: {A} safe and scalable deployment experience",,,True,False,"Leonardo Piga and Iyswarya Narayanan and Aditya Sundarrajan and Matt Skach and Qingyuan Deng and Biswadip Maity and Manoj Chakkaravarthy and Alison Huang and Abhishek Dhanotia and Parth Malani",2024.0,,https://doi.org/10.1145/3617232.3624853,10.1145/3617232.3624853,,"Expanding Datacenter Capacity with {DVFS} Boosting: {A} safe and scalable deployment experience",Expanding Datacenter Capacity with DVFS Boosting,https://www.researchgate.net/publication/379917331_Expanding_Datacenter_Capacity_with_DVFS_Boosting_A_safe_and_scalable_deployment_experience,"[35] DVFS Boosting was explored as a scalable and secure approach to enhance data center capacity, tackling power consumption, hardware heterogeneity, and ..." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,dfvs-4,\cite{dfvs-4},Improving {DVFS} in NoCs with Coherence Prediction,,,True,False,"Robert Hesse and Natalie D. Enright Jerger",2015.0,,https://doi.org/10.1145/2786572.2786595,10.1145/2786572.2786595,,Improving {DVFS} in NoCs with Coherence Prediction,Improving DVFS in NoCs with Coherence Prediction,https://dl.acm.org/doi/10.1145/2786572.2786595,"In this work, we propose to utilize highly predictable properties of cache-coherence communication to derive more specific and reliable NoC traffic predictions." "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,dvfs-2,\cite{dvfs-2},Variation-aware dynamic voltage/frequency scaling,,,True,False,"Sebastian Herbert and Diana Marculescu",2009.0,,https://doi.org/10.1109/HPCA.2009.4798265,10.1109/HPCA.2009.4798265,,Variation-aware dynamic voltage/frequency scaling,Variation-aware dynamic voltage/frequency scaling,https://ieeexplore.ieee.org/document/4798265/,by S Herbert · 2009 · Cited by 174 — Fine-grained dynamic voltage/frequency scaling (DVFS) is an important tool in managing the balance between power and performance in chip-multiprocessors. "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,dvfs-3,\cite{dvfs-3},"System level analysis of fast, per-core {DVFS} using on-chip switching regulators",,,True,False,"Wonyoung Kim and Meeta Sharma Gupta and Gu{-}Yeon Wei and David M.
Brooks",2008.0,,https://doi.org/10.1109/HPCA.2008.4658633,10.1109/HPCA.2008.4658633,,"System level analysis of fast, per-core {DVFS} using on-chip switching regulators","System Level Analysis of Fast, Per-Core DVFS Using On- ...",https://www.slideserve.com/malise/system-level-analysis-of-fast-per-core-dvfs-using-on-chip-switching-regulators,"System Level Analysis of Fast, Per-Core DVFS Using On-Chip Switching Regulators. Wonyoung Kim, Meeta Gupta Prof. Gu-Yeon Wei, Prof. David Brooks" "CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge",2506.02847v1,bateni2020neuos,\cite{bateni2020neuos},$\{$NeuOS$\}$: A $\{$Latency-Predictable$\}$$\{$Multi-Dimensional$\}$ Optimization Framework for $\{$DNN-driven$\}$ Autonomous Systems,,,True,False,"Bateni, Soroush and Liu, Cong",2020.0,,,,,$\{$NeuOS$\}$: A $\{$Latency-Predictable$\}$$\{$Multi-Dimensional$\}$ Optimization Framework for $\{$DNN-driven$\}$ Autonomous Systems,Soroush Bateni - DBLP,https://dblp.org/pid/224/5652,NeuOS: A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems. USENIX ATC 2020: 371-385. [i2]. view. Refining Datapath for Microscaling ViTs,2505.22194v1,lin2017accurate,\cite{lin2017accurate},Towards Accurate Binary Convolutional Neural Network,http://arxiv.org/abs/1711.11294v1,"We introduce a novel scheme to train binary convolutional neural networks (CNNs) -- CNNs with weights and activations constrained to {-1,+1} at run-time. It has been known that using binary weights and activations drastically reduce memory size and accesses, and can replace arithmetic operations with more efficient bitwise operations, leading to much faster test-time inference and lower power consumption. However, previous works on binarizing CNNs usually result in severe prediction accuracy degradation. In this paper, we address this issue with two major innovations: (1) approximating full-precision weights with the linear combination of multiple binary weight bases; (2) employing multiple binary activations to alleviate information loss. The implementation of the resulting binary CNN, denoted as ABC-Net, is shown to achieve much closer performance to its full-precision counterpart, and even reach the comparable prediction accuracy on ImageNet and forest trail datasets, given adequate binary weight bases and activations.",True,True,Xiaofan Lin and Cong Zhao and Wei Pan,2017.0,,,,,Towards Accurate Binary Convolutional Neural Network,Towards Accurate Binary Convolutional Neural Network,http://arxiv.org/pdf/1711.11294v1,"We introduce a novel scheme to train binary convolutional neural networks (CNNs) -- CNNs with weights and activations constrained to {-1,+1} at run-time. It has been known that using binary weights and activations drastically reduce memory size and accesses, and can replace arithmetic operations with more efficient bitwise operations, leading to much faster test-time inference and lower power consumption. However, previous works on binarizing CNNs usually result in severe prediction accuracy degradation. In this paper, we address this issue with two major innovations: (1) approximating full-precision weights with the linear combination of multiple binary weight bases; (2) employing multiple binary activations to alleviate information loss. 
The implementation of the resulting binary CNN, denoted as ABC-Net, is shown to achieve much closer performance to its full-precision counterpart, and even reach the comparable prediction accuracy on ImageNet and forest trail datasets, given adequate binary weight bases and activations." Refining Datapath for Microscaling ViTs,2505.22194v1,zhang2018lqnets,\cite{zhang2018lqnets},LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks,,,True,False,Dongqing Zhang and Jiaolong Yang and Dongqiangzi Ye and Gang Hua,2018.0,,,,,LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks,LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks,http://arxiv.org/pdf/1807.10029v1,"Although weight and activation quantization is an effective approach for Deep Neural Network (DNN) compression and has a lot of potentials to increase inference speed leveraging bit-operations, there is still a noticeable gap in terms of prediction accuracy between the quantized model and the full-precision model. To address this gap, we propose to jointly train a quantized, bit-operation-compatible DNN and its associated quantizers, as opposed to using fixed, handcrafted quantization schemes such as uniform or logarithmic quantization. Our method for learning the quantizers applies to both network weights and activations with arbitrary-bit precision, and our quantizers are easy to train. The comprehensive experiments on CIFAR-10 and ImageNet datasets show that our method works consistently well for various network structures such as AlexNet, VGG-Net, GoogLeNet, ResNet, and DenseNet, surpassing previous quantization methods in terms of accuracy by an appreciable margin. Code available at https://github.com/Microsoft/LQ-Nets" Refining Datapath for Microscaling ViTs,2505.22194v1,wu2018training,\cite{wu2018training},Training and Inference with Integers in Deep Neural Networks,http://arxiv.org/abs/1802.04680v1,"Researches on deep neural networks with discrete parameters and their deployment in embedded systems have been active and promising topics. Although previous works have successfully reduced precision in inference, transferring both training and inference processes to low-bitwidth integers has not been demonstrated simultaneously. In this work, we develop a new method termed as ""WAGE"" to discretize both training and inference, where weights (W), activations (A), gradients (G) and errors (E) among layers are shifted and linearly constrained to low-bitwidth integers. To perform pure discrete dataflow for fixed-point devices, we further replace batch normalization by a constant scaling layer and simplify other components that are arduous for integer implementation. Improved accuracies can be obtained on multiple datasets, which indicates that WAGE somehow acts as a type of regularization. 
Empirically, we demonstrate the potential to deploy training in hardware systems such as integer-based deep learning accelerators and neuromorphic chips with comparable accuracy and higher energy efficiency, which is crucial to future AI applications in variable scenarios with transfer and continual learning demands.",True,True,"Wu, Shuang and Li, Guoqi and Chen, Feng and Shi, Luping",2018.0,,,,arXiv preprint arXiv:1802.04680,Training and Inference with Integers in Deep Neural Networks,Training and Inference with Integers in Deep Neural Networks,http://arxiv.org/pdf/1802.04680v1,"Researches on deep neural networks with discrete parameters and their deployment in embedded systems have been active and promising topics. Although previous works have successfully reduced precision in inference, transferring both training and inference processes to low-bitwidth integers has not been demonstrated simultaneously. In this work, we develop a new method termed as ""WAGE"" to discretize both training and inference, where weights (W), activations (A), gradients (G) and errors (E) among layers are shifted and linearly constrained to low-bitwidth integers. To perform pure discrete dataflow for fixed-point devices, we further replace batch normalization by a constant scaling layer and simplify other components that are arduous for integer implementation. Improved accuracies can be obtained on multiple datasets, which indicates that WAGE somehow acts as a type of regularization. Empirically, we demonstrate the potential to deploy training in hardware systems such as integer-based deep learning accelerators and neuromorphic chips with comparable accuracy and higher energy efficiency, which is crucial to future AI applications in variable scenarios with transfer and continual learning demands." Refining Datapath for Microscaling ViTs,2505.22194v1,krishnamoorthi2018quantizing,\cite{krishnamoorthi2018quantizing},Quantizing deep convolutional networks for efficient inference: A whitepaper,,,True,False,"Krishnamoorthi, Raghuraman",2018.0,,,,arXiv preprint arXiv:1806.08342,Quantizing deep convolutional networks for efficient inference: A whitepaper,Quantizing deep convolutional networks for efficient inference: A whitepaper,http://arxiv.org/pdf/1806.08342v1,"We present an overview of techniques for quantizing convolutional neural networks for inference with integer weights and activations. Per-channel quantization of weights and per-layer quantization of activations to 8-bits of precision post-training produces classification accuracies within 2% of floating point networks for a wide variety of CNN architectures. Model sizes can be reduced by a factor of 4 by quantizing weights to 8-bits, even when 8-bit arithmetic is not supported. This can be achieved with simple, post training quantization of weights. We benchmark latencies of quantized networks on CPUs and DSPs and observe a speedup of 2x-3x for quantized implementations compared to floating point on CPUs. Speedups of up to 10x are observed on specialized processors with fixed point SIMD capabilities, like the Qualcomm QDSPs with HVX. Quantization-aware training can provide further improvements, reducing the gap to floating point to 1% at 8-bit precision.
Quantization-aware training also allows for reducing the precision of weights to four bits with accuracy losses ranging from 2% to 10%, with higher accuracy drop for smaller networks. We introduce tools in TensorFlow and TensorFlowLite for quantizing convolutional networks and review best practices for quantization-aware training to obtain high accuracy with quantized weights and activations. We recommend that per-channel quantization of weights and per-layer quantization of activations be the preferred quantization scheme for hardware acceleration and kernel optimization. We also propose that future processors and hardware accelerators for optimized inference support precisions of 4, 8 and 16 bits." Refining Datapath for Microscaling ViTs,2505.22194v1,dai2021vs,\cite{dai2021vs},VS-Quant: Per-vector scaled quantization for accurate low-precision neural network inference,,,True,False,"Dai, Steve and Venkatesan, Rangha and Ren, Mark and Zimmer, Brian and Dally, William and Khailany, Brucek",2021.0,,,,Proceedings of Machine Learning and Systems,VS-Quant: Per-vector scaled quantization for accurate low-precision neural network inference,[PDF] VS-Quant: Per-vector Scaled Quantization for Accurate Low ... - arXiv,https://arxiv.org/pdf/2102.04503,We find that per-vector scaling consistently achieves better inference accuracy at low precision compared to conventional scaling techniques for Refining Datapath for Microscaling ViTs,2505.22194v1,harma2022accuracy,\cite{harma2022accuracy},Accuracy Boosters: Epoch-Driven Mixed-Mantissa Block Floating-Point for DNN Training,,,True,False,"Harma, Simla Burcu and S{\""o}nmez, Canberk and Falsafi, Babak and Jaggi, Martin and Oh, Yunho",2022.0,,,,arXiv preprint arXiv:2211.10737,Accuracy Boosters: Epoch-Driven Mixed-Mantissa Block Floating-Point for DNN Training,[PDF] Epoch-Driven Mixed-Mantissa Block Floating Point for DNN Training,https://openreview.net/pdf?id=nfmfqzQ4Mwl,"Using analytic models, we show Accuracy Boosters enable increasing arithmetic density for an HBFP training accelerator by up to 21.3× compared to FP32 and up to" Refining Datapath for Microscaling ViTs,2505.22194v1,darvish2020pushing,\cite{darvish2020pushing},Pushing the limits of narrow precision inferencing at cloud scale with Microsoft floating point,,,True,False,"Darvish Rouhani, Bita and Lo, Daniel and Zhao, Ritchie and Liu, Ming and Fowers, Jeremy and Ovtcharov, Kalin and Vinogradsky, Anna and Massengill, Sarah and Yang, Lita and Bittner, Ray and others",2020.0,,,,Advances in neural information processing systems,Pushing the limits of narrow precision inferencing at cloud scale with Microsoft floating point,[PDF] Pushing the Limits of Narrow Precision Inferencing at Cloud Scale ...,https://proceedings.neurips.cc/paper/2020/file/747e32ab0fea7fbd2ad9ec03daa3f840-Paper.pdf,"In this paper, we explore the limits of Microsoft Floating Point (MSFP), a new class of datatypes developed for production cloud-scale inferencing on custom" Refining Datapath for Microscaling ViTs,2505.22194v1,darvish2023shared,\cite{darvish2023shared},"With Shared Microexponents, A Little Shifting Goes a Long Way",http://arxiv.org/abs/2302.08007v2,"This paper introduces Block Data Representations (BDR), a framework for exploring and evaluating a wide spectrum of narrow-precision formats for deep learning.
It enables comparison of popular quantization standards, and through BDR, new formats based on shared microexponents (MX) are identified, which outperform other state-of-the-art quantization approaches, including narrow-precision floating-point and block floating-point. MX utilizes multiple levels of quantization scaling with ultra-fine scaling factors based on shared microexponents in the hardware. The effectiveness of MX is demonstrated on real-world models including large-scale generative pretraining and inferencing, and production-scale recommendation systems.",True,True,"Darvish Rouhani, Bita and Zhao, Ritchie and Elango, Venmugil and Shafipour, Rasoul and Hall, Mathew and Mesmakhosroshahi, Maral and More, Ankit and Melnick, Levi and Golub, Maximilian and Varatkar, Girish and others",2023.0,,,,,"With Shared Microexponents, A Little Shifting Goes a Long Way","With Shared Microexponents, A Little Shifting Goes a Long Way",http://arxiv.org/pdf/2302.08007v2,"This paper introduces Block Data Representations (BDR), a framework for exploring and evaluating a wide spectrum of narrow-precision formats for deep learning. It enables comparison of popular quantization standards, and through BDR, new formats based on shared microexponents (MX) are identified, which outperform other state-of-the-art quantization approaches, including narrow-precision floating-point and block floating-point. MX utilizes multiple levels of quantization scaling with ultra-fine scaling factors based on shared microexponents in the hardware. The effectiveness of MX is demonstrated on real-world models including large-scale generative pretraining and inferencing, and production-scale recommendation systems." Refining Datapath for Microscaling ViTs,2505.22194v1,andri2022going,\cite{andri2022going},"Going Further With Winograd Convolutions: Tap-Wise Quantization for Efficient Inference on 4x4 Tile",http://arxiv.org/abs/2209.12982v1,"Most of today's computer vision pipelines are built around deep neural networks, where convolution operations require most of the generally high compute effort. The Winograd convolution algorithm computes convolutions with fewer MACs compared to the standard algorithm, reducing the operation count by a factor of 2.25x for 3x3 convolutions when using the version with 2x2-sized tiles $F_2$. Even though the gain is significant, the Winograd algorithm with larger tile sizes, i.e., $F_4$, offers even more potential in improving throughput and energy efficiency, as it reduces the required MACs by 4x. Unfortunately, the Winograd algorithm with larger tile sizes introduces numerical issues that prevent its use on integer domain-specific accelerators and higher computational overhead to transform input and output data between spatial and Winograd domains. To unlock the full potential of Winograd $F_4$, we propose a novel tap-wise quantization method that overcomes the numerical issues of using larger tiles, enabling integer-only inference. Moreover, we present custom hardware units that process the Winograd transformations in a power- and area-efficient way, and we show how to integrate such custom modules in an industrial-grade, programmable DSA. An extensive experimental evaluation on a large set of state-of-the-art computer vision benchmarks reveals that the tap-wise quantization algorithm makes the quantized Winograd $F_4$ network almost as accurate as the FP32 baseline. 
The Winograd-enhanced DSA achieves up to 1.85x gain in energy efficiency and up to 1.83x end-to-end speed-up for state-of-the-art segmentation and detection networks.",True,True,"Andri, Renzo and Bussolino, Beatrice and Cipolletta, Antonio and Cavigelli, Lukas and Wang, Zhe",2022.0,,,,,"Going Further With Winograd Convolutions: Tap-Wise Quantization for Efficient Inference on 4x4 Tile",Tap-Wise Quantization for Efficient Inference on 4x4 Tile - arXiv,https://arxiv.org/abs/2209.12982,"The Winograd convolution algorithm computes convolutions with fewer MACs compared to the standard algorithm, reducing the operation count by a" Refining Datapath for Microscaling ViTs,2505.22194v1,song2020drq,\cite{song2020drq},DRQ: Dynamic region-based quantization for deep neural network acceleration,,,True,False,"Song, Zhuoran and Fu, Bangqi and Wu, Feiyang and Jiang, Zhaoming and Jiang, Li and Jing, Naifeng and Liang, Xiaoyao",2020.0,,,,,DRQ: Dynamic region-based quantization for deep neural network acceleration,DRQ Dynamic Region-based Quantization for Deep Neural Network ...,https://github.com/BirenResearch/AIChip_Paper_List/blob/master/notes/ISCA/DRQ%20Dynamic%20Region-based%20Quantization%20for%20Deep%20Neural%20Network%20Acceleration.md,Paper title: DRQ: Dynamic Region-based Quantization for Deep Neural Network Acceleration · Publication: ISCA'20 · Problem to solve: Quantification is an effective Refining Datapath for Microscaling ViTs,2505.22194v1,zadeh2022mokey,\cite{zadeh2022mokey},"Mokey: Enabling Narrow Fixed-Point Inference for Out-of-the-Box Floating-Point Transformer Models",http://arxiv.org/abs/2203.12758v1,"Increasingly larger and better Transformer models keep advancing state-of-the-art accuracy and capability for Natural Language Processing applications. These models demand more computational power, storage, and energy. Mokey reduces the footprint of state-of-the-art 32-bit or 16-bit floating-point transformer models by quantizing all values to 4-bit indexes into dictionaries of representative 16-bit fixed-point centroids. Mokey does not need fine-tuning, an essential feature as often the training resources or datasets are not available to many. Exploiting the range of values that naturally occur in transformer models, Mokey selects centroid values to also fit an exponential curve. This unique feature enables Mokey to replace the bulk of the original multiply-accumulate operations with narrow 3b fixed-point additions resulting in an area- and energy-efficient hardware accelerator design. Over a set of state-of-the-art transformer models, the Mokey accelerator delivers an order of magnitude improvements in energy efficiency over a Tensor Cores-based accelerator while improving performance by at least $4\times$ and as much as $15\times$ depending on the model and on-chip buffering capacity. Optionally, Mokey can be used as a memory compression assist for any other accelerator, transparently stashing wide floating-point or fixed-point activations or weights into narrow 4-bit indexes.
Mokey proves superior to prior state-of-the-art quantization methods for Transformers.",True,True,"Zadeh, Ali Hadi and Mahmoud, Mostafa and Abdelhadi, Ameer and Moshovos, Andreas",2022.0,,,,,"Mokey: Enabling Narrow Fixed-Point Inference for Out-of-the-Box Floating-Point Transformer Models",Mokey: Enabling Narrow Fixed-Point Inference for Out-of-the-Box ...,https://arxiv.org/abs/2203.12758,Mokey reduces the footprint of state-of-the-art 32-bit or 16-bit floating-point transformer models by quantizing all values to 4-bit indexes into dictionaries. Refining Datapath for Microscaling ViTs,2505.22194v1,zhao2021cambricon,\cite{zhao2021cambricon},Cambricon-Q: A hybrid architecture for efficient training,,,True,False,"Zhao, Yongwei and Liu, Chang and Du, Zidong and Guo, Qi and Hu, Xing and Zhuang, Yimin and Zhang, Zhenxing and Song, Xinkai and Li, Wei and Zhang, Xishan and others",2021.0,,,,,Cambricon-Q: A hybrid architecture for efficient training,Cambricon-Q: A Hybrid Architecture for Efficient Training,https://www.computer.org/csdl/proceedings-article/isca/2021/333300a706/1vNjDWVoisw,Cambricon-Q features a hybrid architecture consisting of an ASIC acceleration core and a near-data-processing (NDP) engine. The acceleration core mainly targets Refining Datapath for Microscaling ViTs,2505.22194v1,wang2019haq,\cite{wang2019haq},HAQ: Hardware-Aware Automated Quantization with Mixed Precision,http://arxiv.org/abs/1811.08886v3,"Model quantization is a widely used technique to compress and accelerate deep neural network (DNN) inference. Emergent DNN hardware accelerators begin to support mixed precision (1-8 bits) to further improve the computation efficiency, which raises a great challenge to find the optimal bitwidth for each layer: it requires domain experts to explore the vast design space trading off among accuracy, latency, energy, and model size, which is both time-consuming and sub-optimal. Conventional quantization algorithm ignores the different hardware architectures and quantizes all the layers in a uniform way. In this paper, we introduce the Hardware-Aware Automated Quantization (HAQ) framework which leverages the reinforcement learning to automatically determine the quantization policy, and we take the hardware accelerator's feedback in the design loop. Rather than relying on proxy signals such as FLOPs and model size, we employ a hardware simulator to generate direct feedback signals (latency and energy) to the RL agent. Compared with conventional methods, our framework is fully automated and can specialize the quantization policy for different neural network architectures and hardware architectures. Our framework effectively reduced the latency by 1.4-1.95x and the energy consumption by 1.9x with negligible loss of accuracy compared with the fixed bitwidth (8 bits) quantization. Our framework reveals that the optimal policies on different hardware architectures (i.e., edge and cloud architectures) under different resource constraints (i.e., latency, energy and model size) are drastically different. 
We interpreted the implication of different quantization policies, which offer insights for both neural network architecture design and hardware architecture design.",True,True,"Wang, Kuan and Liu, Zhijian and Lin, Yujun and Lin, Ji and Han, Song",2019.0,,,,,HAQ: Hardware-Aware Automated Quantization with Mixed Precision,HAQ: Hardware-Aware Automated Quantization with Mixed Precision,https://arxiv.org/abs/1811.08886,"In this paper, we introduce the Hardware-Aware Automated Quantization (HAQ) framework which leverages the reinforcement learning to automatically determine the" Refining Datapath for Microscaling ViTs,2505.22194v1,dettmers2022llm,\cite{dettmers2022llm},LLM.int8(): 8-bit matrix multiplication for transformers at scale,,,True,False,"Dettmers, Tim and Lewis, Mike and Belkada, Younes and Zettlemoyer, Luke",2022.0,,,,arXiv preprint arXiv:2208.07339,LLM.int8(): 8-bit matrix multiplication for transformers at scale,LLM.int8(): 8-bit matrix multiplication for transformers at scale,https://dl.acm.org/doi/10.5555/3600270.3602468,"We develop a procedure for Int8 matrix multiplication for feed-forward and attention projection layers in transformers, which cut the memory needed for" Refining Datapath for Microscaling ViTs,2505.22194v1,frantar2022gptq,\cite{frantar2022gptq},"GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers",http://arxiv.org/abs/2210.17323v2,"Generative Pre-trained Transformer models, known as GPT or OPT, set themselves apart through breakthrough performance across complex language modelling tasks, but also by their extremely high computational and storage costs. Specifically, due to their massive size, even inference for large, highly-accurate GPT models may require multiple performant GPUs, which limits the usability of such models. While there is emerging work on relieving this pressure via model compression, the applicability and performance of existing compression techniques is limited by the scale and complexity of GPT models. In this paper, we address this challenge, and propose GPTQ, a new one-shot weight quantization method based on approximate second-order information, that is both highly-accurate and highly-efficient. Specifically, GPTQ can quantize GPT models with 175 billion parameters in approximately four GPU hours, reducing the bitwidth down to 3 or 4 bits per weight, with negligible accuracy degradation relative to the uncompressed baseline. Our method more than doubles the compression gains relative to previously-proposed one-shot quantization methods, preserving accuracy, allowing us for the first time to execute an 175 billion-parameter model inside a single GPU for generative inference. Moreover, we also show that our method can still provide reasonable accuracy in the extreme quantization regime, in which weights are quantized to 2-bit or even ternary quantization levels. We show experimentally that these improvements can be leveraged for end-to-end inference speedups over FP16, of around 3.25x when using high-end GPUs (NVIDIA A100) and 4.5x when using more cost-effective ones (NVIDIA A6000).
The implementation is available at https://github.com/IST-DASLab/gptq.",True,True,"Frantar, Elias and Ashkboos, Saleh and Hoefler, Torsten and Alistarh, Dan",2022.0,,,,arXiv preprint arXiv:2210.17323,"GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers",[PDF] gptq: accurate post-training quantization - arXiv,https://arxiv.org/pdf/2210.17323,"Generative Pre-trained Transformer models, known as GPT or OPT, set themselves apart through breakthrough performance across complex" Refining Datapath for Microscaling ViTs,2505.22194v1,dong2019hawq,\cite{dong2019hawq},HAWQ: Hessian AWare Quantization of Neural Networks with Mixed-Precision,http://arxiv.org/abs/1905.03696v1,"Model size and inference speed/power have become a major challenge in the deployment of Neural Networks for many applications. A promising approach to address these problems is quantization. However, uniformly quantizing a model to ultra low precision leads to significant accuracy degradation. A novel solution for this is to use mixed-precision quantization, as some parts of the network may allow lower precision as compared to other layers. However, there is no systematic way to determine the precision of different layers. A brute force approach is not feasible for deep networks, as the search space for mixed-precision is exponential in the number of layers. Another challenge is a similar factorial complexity for determining block-wise fine-tuning order when quantizing the model to a target precision. Here, we introduce Hessian AWare Quantization (HAWQ), a novel second-order quantization method to address these problems. HAWQ allows for the automatic selection of the relative quantization precision of each layer, based on the layer's Hessian spectrum. Moreover, HAWQ provides a deterministic fine-tuning order for quantizing layers, based on second-order information. We show the results of our method on Cifar-10 using ResNet20, and on ImageNet using Inception-V3, ResNet50 and SqueezeNext models. Comparing HAWQ with state-of-the-art shows that we can achieve similar/better accuracy with $8\times$ activation compression ratio on ResNet20, as compared to DNAS~\cite{wu2018mixed}, and up to $1\%$ higher accuracy with up to $14\%$ smaller models on ResNet50 and Inception-V3, compared to recently proposed methods of RVQuant~\cite{park2018value} and HAQ~\cite{wang2018haq}. Furthermore, we show that we can quantize SqueezeNext to just 1MB model size while achieving above $68\%$ top1 accuracy on ImageNet.",True,True,"Dong, Zhen and Yao, Zhewei and Gholami, Amir and Mahoney, Michael W and Keutzer, Kurt",2019.0,,,,,HAWQ: Hessian AWare Quantization of Neural Networks with Mixed-Precision,[PDF] Hessian AWare Quantization of Neural Networks With Mixed-Precision,https://www.stat.berkeley.edu/~mmahoney/pubs/HAWQ_ICCV_2019_paper.pdf,"HAWQ allows for the automatic selection of the relative quantization precision of each layer, based on the layer's Hessian spectrum. Moreover, HAWQ provides a" Refining Datapath for Microscaling ViTs,2505.22194v1,xiao2022smoothquant,\cite{xiao2022smoothquant},"SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models",http://arxiv.org/abs/2211.10438v7,"Large language models (LLMs) show excellent performance but are compute- and memory-intensive. Quantization can reduce memory and accelerate inference. However, existing methods cannot maintain accuracy and hardware efficiency at the same time.
We propose SmoothQuant, a training-free, accuracy-preserving, and general-purpose post-training quantization (PTQ) solution to enable 8-bit weight, 8-bit activation (W8A8) quantization for LLMs. Based on the fact that weights are easy to quantize while activations are not, SmoothQuant smooths the activation outliers by offline migrating the quantization difficulty from activations to weights with a mathematically equivalent transformation. SmoothQuant enables an INT8 quantization of both weights and activations for all the matrix multiplications in LLMs, including OPT, BLOOM, GLM, MT-NLG, Llama-1/2, Falcon, Mistral, and Mixtral models. We demonstrate up to 1.56x speedup and 2x memory reduction for LLMs with negligible loss in accuracy. SmoothQuant enables serving 530B LLM within a single node. Our work offers a turn-key solution that reduces hardware costs and democratizes LLMs. Code is available at https://github.com/mit-han-lab/smoothquant.",True,True,"Xiao, Guangxuan and Lin, Ji and Seznec, Mickael and Demouth, Julien and Han, Song",2022.0,,,,arXiv preprint arXiv:2211.10438,"SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models",SmoothQuant: Accurate and Efficient Post-Training Quantization for ...,https://arxiv.org/abs/2211.10438,"arXiv:2211.10438 (cs): SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models, by Guangxuan Xiao and 5 other authors" Refining Datapath for Microscaling ViTs,2505.22194v1,yao2022zeroquant,\cite{yao2022zeroquant},"ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers",http://arxiv.org/abs/2206.01861v1,"How to efficiently serve ever-larger trained natural language models in practice has become exceptionally challenging even for powerful cloud servers due to their prohibitive memory/computation requirements. In this work, we present an efficient and affordable post-training quantization approach to compress large Transformer-based models, termed as ZeroQuant. ZeroQuant is an end-to-end quantization and inference pipeline with three main components: (1) a fine-grained hardware-friendly quantization scheme for both weight and activations; (2) a novel affordable layer-by-layer knowledge distillation algorithm (LKD) even without the access to the original training data; (3) a highly-optimized quantization system backend support to remove the quantization/dequantization overhead.
As such, we are able to show that: (1) ZeroQuant can reduce the precision for weights and activations to INT8 in a cost-free way for both BERT and GPT3-style models with minimal accuracy impact, which leads to up to 5.19x/4.16x speedup on those models compared to FP16 inference; (2) ZeroQuant plus LKD affordably quantize the weights in the fully-connected module to INT4 along with INT8 weights in the attention module and INT8 activations, resulting in 3x memory footprint reduction compared to the FP16 model; (3) ZeroQuant can be directly applied to two of the largest open-sourced language models, including GPT-J6B and GPT-NeoX20, for which our INT8 model achieves similar accuracy as the FP16 model but achieves up to 5.2x better efficiency.",True,True,"Yao, Zhewei and Yazdani Aminabadi, Reza and Zhang, Minjia and Wu, Xiaoxia and Li, Conglong and He, Yuxiong",2022.0,,,,Advances in Neural Information Processing Systems,"ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers",ZeroQuant: Efficient and Affordable Post-Training Quantization for ...,https://arxiv.org/abs/2206.01861,"In this work, we present an efficient and affordable post-training quantization approach to compress large Transformer-based models, termed as ZeroQuant." Refining Datapath for Microscaling ViTs,2505.22194v1,liu2023psq,\cite{liu2023psq},PSQ: An Automatic Search Framework for Data-Free Quantization on PIM-based Architecture,,,True,False,"Liu, Fangxin and Yang, Ning and Jiang, Li",2023.0,,,,,PSQ: An Automatic Search Framework for Data-Free Quantization on PIM-based Architecture,PSQ: An Automatic Search Framework for Data-Free Quantization ...,https://ieeexplore.ieee.org/servlet/Login?logout=/document/10361000,The scheme tightly combines the search principle of quantization and the PIM architecture to provide smooth hardware-friendly quantization. We leverage the Refining Datapath for Microscaling ViTs,2505.22194v1,liu2024spark,\cite{liu2024spark},SPARK: Scalable and Precision-Aware Acceleration of Neural Networks via Efficient Encoding,,,True,False,"Liu, Fangxin and Yang, Ning and Li, Haomin and Wang, Zongwu and Song, Zhuoran and Pei, Songwen and Jiang, Li",2024.0,,,,,SPARK: Scalable and Precision-Aware Acceleration of Neural Networks via Efficient Encoding,Scalable and Precision-Aware Acceleration of Neural ...,https://ieeexplore.ieee.org/document/10476472/,by F Liu · 2024 · Cited by 31 — SPARK: Scalable and Precision-Aware Acceleration of Neural Networks via Efficient Encoding ; Article #: ; Date of Conference: 02-06 March 2024 ; Date Added to IEEE Refining Datapath for Microscaling ViTs,2505.22194v1,chang2021mix,\cite{chang2021mix},"Mix and Match: A Novel FPGA-Centric Deep Neural Network Quantization Framework",http://arxiv.org/abs/2012.04240v2,"Deep Neural Networks (DNNs) have achieved extraordinary performance in various application domains. To support diverse DNN models, efficient implementations of DNN inference on edge-computing platforms, e.g., ASICs, FPGAs, and embedded systems, are extensively investigated. Due to the huge model size and computation amount, model compression is a critical step to deploy DNN models on edge devices. This paper focuses on weight quantization, a hardware-friendly model compression approach that is complementary to weight pruning. Unlike existing methods that use the same quantization scheme for all weights, we propose the first solution that applies different quantization schemes for different rows of the weight matrix. 
It is motivated by (1) the distribution of the weights in the different rows are not the same; and (2) the potential of achieving better utilization of heterogeneous FPGA hardware resources. To achieve that, we first propose a hardware-friendly quantization scheme named sum-of-power-of-2 (SP2) suitable for Gaussian-like weight distribution, in which the multiplication arithmetic can be replaced with logic shifter and adder, thereby enabling highly efficient implementations with the FPGA LUT resources. In contrast, the existing fixed-point quantization is suitable for Uniform-like weight distribution and can be implemented efficiently by DSP. Then to fully explore the resources, we propose an FPGA-centric mixed scheme quantization (MSQ) with an ensemble of the proposed SP2 and the fixed-point schemes. Combining the two schemes can maintain, or even increase accuracy due to better matching with weight distributions.",True,True,"Chang, Sung-En and Li, Yanyu and Sun, Mengshu and Shi, Runbin and So, Hayden K-H and Qian, Xuehai and Wang, Yanzhi and Lin, Xue",2021.0,,,,,"Mix and Match: A Novel FPGA-Centric Deep Neural Network Quantization Framework",[PDF] A Novel FPGA-Centric Deep Neural Network Quantization Framework,https://par.nsf.gov/servlets/purl/10232486,"This paper proposes a DNN quantization framework that applies different quantization schemes for different weight matrix rows, using a hardware-friendly SP2" Refining Datapath for Microscaling ViTs,2505.22194v1,wu2023msd,\cite{wu2023msd},MSD: Mixing Signed Digit Representations for Hardware-efficient DNN Acceleration on FPGA with Heterogeneous Resources,,,True,False,"Wu, Jiajun and Zhou, Jiajun and Gao, Yizhao and Ding, Yuhao and Wong, Ngai and So, Hayden Kwok-Hay",2023.0,,,,,MSD: Mixing Signed Digit Representations for Hardware-efficient DNN Acceleration on FPGA with Heterogeneous Resources,MSD: Mixing Signed Digit Representations for Hardware-efficient ...,https://www.researchgate.net/publication/372264814_MSD_Mixing_Signed_Digit_Representations_for_Hardware-efficient_DNN_Acceleration_on_FPGA_with_Heterogeneous_Resources,MSD: Mixing Signed Digit Representations for Hardware-efficient DNN Acceleration on FPGA with Heterogeneous Resources ... effectively improve training efficiency Refining Datapath for Microscaling ViTs,2505.22194v1,sharma2018bit,\cite{sharma2018bit},"Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Networks",http://arxiv.org/abs/1712.01507v2,"Fully realizing the potential of acceleration for Deep Neural Networks (DNNs) requires understanding and leveraging algorithmic properties. This paper builds upon the algorithmic insight that bitwidth of operations in DNNs can be reduced without compromising their classification accuracy. However, to prevent accuracy loss, the bitwidth varies significantly across DNNs and it may even be adjusted for each layer. Thus, a fixed-bitwidth accelerator would either offer limited benefits to accommodate the worst-case bitwidth requirements, or lead to a degradation in final accuracy. To alleviate these deficiencies, this work introduces dynamic bit-level fusion/decomposition as a new dimension in the design of DNN accelerators. We explore this dimension by designing Bit Fusion, a bit-flexible accelerator, that constitutes an array of bit-level processing elements that dynamically fuse to match the bitwidth of individual DNN layers. 
This flexibility in the architecture enables minimizing the computation and the communication at the finest granularity possible with no loss in accuracy. We evaluate the benefits of BitFusion using eight real-world feed-forward and recurrent DNNs. The proposed microarchitecture is implemented in Verilog and synthesized in 45 nm technology. Using the synthesis results and cycle accurate simulation, we compare the benefits of Bit Fusion to two state-of-the-art DNN accelerators, Eyeriss and Stripes. In the same area, frequency, and process technology, BitFusion offers 3.9x speedup and 5.1x energy savings over Eyeriss. Compared to Stripes, BitFusion provides 2.6x speedup and 3.9x energy reduction at 45 nm node when BitFusion area and frequency are set to those of Stripes. Scaling to GPU technology node of 16 nm, BitFusion almost matches the performance of a 250-Watt Titan Xp, which uses 8-bit vector instructions, while BitFusion merely consumes 895 milliwatts of power.",True,True,"Sharma, Hardik and Park, Jongse and Suda, Naveen and Lai, Liangzhen and Chau, Benson and Kim, Joon Kyung and Chandra, Vikas and Esmaeilzadeh, Hadi",2018.0,,,,,"Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Networks",[PDF] Bit Fusion: Bit-Level Dynamically Composable Architecture for ...,https://bpb-us-w2.wpmucdn.com/sites.coecis.cornell.edu/dist/7/587/files/2023/06/Sharma_2018_Bit.pdf,RETROSPECTIVE: Bit Fusion: Bit-Level. Dynamically Composable Architecture for. Accelerating Deep Neural Networks. Hardik Sharma1. Jongse Park2. Naveen Suda3. Refining Datapath for Microscaling ViTs,2505.22194v1,fan2022adaptable,\cite{fan2022adaptable},"Adaptable Butterfly Accelerator for Attention-based NNs via Hardware and Algorithm Co-design",http://arxiv.org/abs/2209.09570v1,"Attention-based neural networks have become pervasive in many AI tasks. Despite their excellent algorithmic performance, the use of the attention mechanism and feed-forward network (FFN) demands excessive computational and memory resources, which often compromises their hardware performance. Although various sparse variants have been introduced, most approaches only focus on mitigating the quadratic scaling of attention on the algorithm level, without explicitly considering the efficiency of mapping their methods on real hardware designs. Furthermore, most efforts only focus on either the attention mechanism or the FFNs but without jointly optimizing both parts, causing most of the current designs to lack scalability when dealing with different input lengths. This paper systematically considers the sparsity patterns in different variants from a hardware perspective. On the algorithmic level, we propose FABNet, a hardware-friendly variant that adopts a unified butterfly sparsity pattern to approximate both the attention mechanism and the FFNs. On the hardware level, a novel adaptable butterfly accelerator is proposed that can be configured at runtime via dedicated hardware control to accelerate different butterfly layers using a single unified hardware engine. On the Long-Range-Arena dataset, FABNet achieves the same accuracy as the vanilla Transformer while reducing the amount of computation by 10 to 66 times and the number of parameters 2 to 22 times. By jointly optimizing the algorithm and hardware, our FPGA-based butterfly accelerator achieves 14.2 to 23.2 times speedup over state-of-the-art accelerators normalized to the same computational budget. 
Compared with optimized CPU and GPU designs on Raspberry Pi 4 and Jetson Nano, our system is up to 273.8 and 15.1 times faster under the same power budget.",True,True,"Fan, Hongxiang and Chau, Thomas and Venieris, Stylianos I and Lee, Royson and Kouris, Alexandros and Luk, Wayne and Lane, Nicholas D and Abdelfattah, Mohamed S",2022.0,,,,,"Adaptable Butterfly Accelerator for Attention-based NNs via Hardware and Algorithm Co-design",Adaptable Butterfly Accelerator for Attention-based NNs via ...,https://ieeexplore.ieee.org/iel7/9923754/9923780/09923888.pdf,"by H Fan · 2022 · Cited by 74 — In this work, we address this challenge by adopting an algorithm and hardware co-design approach. On the algorithmic level, a hardware-friendly model called." Refining Datapath for Microscaling ViTs,2505.22194v1,ham20203,\cite{ham20203},"A$^3$: Accelerating Attention Mechanisms in Neural Networks with Approximation",http://arxiv.org/abs/2002.10941v1,"With the increasing computational demands of neural networks, many hardware accelerators for the neural networks have been proposed. Such existing neural network accelerators often focus on popular neural network types such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs); however, not much attention has been paid to attention mechanisms, an emerging neural network primitive that enables neural networks to retrieve most relevant information from a knowledge-base, external memory, or past states. The attention mechanism is widely adopted by many state-of-the-art neural networks for computer vision, natural language processing, and machine translation, and accounts for a large portion of total execution time. We observe today's practice of implementing this mechanism using matrix-vector multiplication is suboptimal as the attention mechanism is semantically a content-based search where a large portion of computations ends up not being used. Based on this observation, we design and architect A3, which accelerates attention mechanisms in neural networks with algorithmic approximation and hardware specialization. Our proposed accelerator achieves multiple orders of magnitude improvement in energy efficiency (performance/watt) as well as substantial speedup over the state-of-the-art conventional hardware.",True,True,"Ham, Tae Jun and Jung, Sung Jun and Kim, Seonghak and Oh, Young H and Park, Yeonhong and Song, Yoonho and Park, Jung-Hun and Lee, Sanghee and Park, Kyoung and Lee, Jae W and others",2020.0,,,,,"A$^3$: Accelerating Attention Mechanisms in Neural Networks with Approximation",A$^3$: Accelerating Attention Mechanisms in Neural Networks with ...,https://arxiv.org/abs/2002.10941,"Based on this observation, we design and architect A3, which accelerates attention mechanisms in neural networks with algorithmic approximation" Refining Datapath for Microscaling ViTs,2505.22194v1,ham2021elsa,\cite{ham2021elsa},"ELSA: Hardware-software co-design for efficient, lightweight self-attention mechanism in neural networks",,,True,False,"Ham, Tae Jun and Lee, Yejin and Seo, Seong Hoon and Kim, Soosung and Choi, Hyunji and Jung, Sung Jun and Lee, Jae W",2021.0,,,,,"ELSA: Hardware-software co-design for efficient, lightweight self-attention mechanism in neural networks","ELSA: Hardware-software Co-design for efficient, lightweight ...",https://s-space.snu.ac.kr/handle/10371/183738,"ELSA: Hardware-software Co-design for efficient, lightweight self-attention mechanism in neural networks. 
Cited 111 time in Web of Science Cited 133 time in" Refining Datapath for Microscaling ViTs,2505.22194v1,hong2022dfx,\cite{hong2022dfx},"DFX: A Low-latency Multi-FPGA Appliance for Accelerating Transformer-based Text Generation",http://arxiv.org/abs/2209.10797v1,"Transformer is a deep learning language model widely used for natural language processing (NLP) services in datacenters. Among transformer models, Generative Pre-trained Transformer (GPT) has achieved remarkable performance in text generation, or natural language generation (NLG), which needs the processing of a large input context in the summarization stage, followed by the generation stage that produces a single word at a time. The conventional platforms such as GPU are specialized for the parallel processing of large inputs in the summarization stage, but their performance significantly degrades in the generation stage due to its sequential characteristic. Therefore, an efficient hardware platform is required to address the high latency caused by the sequential characteristic of text generation. In this paper, we present DFX, a multi-FPGA acceleration appliance that executes GPT-2 model inference end-to-end with low latency and high throughput in both summarization and generation stages. DFX uses model parallelism and optimized dataflow that is model-and-hardware-aware for fast simultaneous workload execution among devices. Its compute cores operate on custom instructions and provide GPT-2 operations end-to-end. We implement the proposed hardware architecture on four Xilinx Alveo U280 FPGAs and utilize all of the channels of the high bandwidth memory (HBM) and the maximum number of compute resources for high hardware efficiency. DFX achieves 5.58x speedup and 3.99x energy efficiency over four NVIDIA V100 GPUs on the modern GPT-2 model. DFX is also 8.21x more cost-effective than the GPU appliance, suggesting that it is a promising solution for text generation workloads in cloud datacenters.",True,True,"Hong, Seongmin and Moon, Seungjae and Kim, Junsoo and Lee, Sungjae and Kim, Minsub and Lee, Dongsoo and Kim, Joo-Young",2022.0,,,,,"DFX: A Low-latency Multi-FPGA Appliance for Accelerating Transformer-based Text Generation",DFX: A Low-latency Multi-FPGA Appliance for Accelerating ...,https://ieeexplore.ieee.org/document/9895626,by S Hong · 2022 · Cited by 104 — DFX is a multi-FPGA appliance that accelerates transformer-based text generation. DFX adopts model parallelism to efficiently process the large-scale language Refining Datapath for Microscaling ViTs,2505.22194v1,kao2023flat,\cite{kao2023flat},FLAT: An Optimized Dataflow for Mitigating Attention Bottlenecks,http://arxiv.org/abs/2107.06419v7,"Attention mechanisms, primarily designed to capture pairwise correlations between words, have become the backbone of machine learning, expanding beyond natural language processing into other domains. This growth in adaptation comes at the cost of prohibitively large memory requirements and computational complexity, especially at higher number of input elements. This limitation is due to inherently limited data reuse opportunities and quadratic growth in memory footprints, leading to severe memory-boundedness and limited scalability of input elements. This work addresses these challenges by devising a tailored dataflow optimization, called FLAT, for attention mechanisms without altering their functionality. 
This dataflow processes costly attention operations through a unique fusion mechanism, transforming the memory footprint quadratic growth to merely a linear one. To realize the full potential of this bespoke mechanism, we propose a tiling approach to enhance the data reuse across attention operations. Our method both mitigates the off-chip bandwidth bottleneck as well as reduces the on-chip memory requirement. FLAT delivers 1.94x (1.76x) speedup and 49% and (42%) of energy savings compared to the state-of-the-art Edge (Cloud) accelerators with no customized dataflow optimization. When on-chip resources are scarce (20 KB-200 KB), FLAT yields, on average, 1.5x end-to-end latency reduction across a diverse range of conventional attention-based models with input sequence lengths ranging from 512-token to 64K-token. Our evaluations demonstrate that state-of-the-art DNN dataflow applied to attention operations reach the efficiency limit for inputs above 512 elements. In contrast, FLAT unblocks transformer models for inputs with up to 64K elements",True,True,"Kao, Sheng-Chun and Subramanian, Suvinay and Agrawal, Gaurav and Yazdanbakhsh, Amir and Krishna, Tushar",2023.0,,,,,FLAT: An Optimized Dataflow for Mitigating Attention Bottlenecks,FLAT: An Optimized Dataflow for Mitigating Attention Bottlenecks,http://arxiv.org/pdf/2107.06419v7,"Attention mechanisms, primarily designed to capture pairwise correlations between words, have become the backbone of machine learning, expanding beyond natural language processing into other domains. This growth in adaptation comes at the cost of prohibitively large memory requirements and computational complexity, especially at higher number of input elements. This limitation is due to inherently limited data reuse opportunities and quadratic growth in memory footprints, leading to severe memory-boundedness and limited scalability of input elements. This work addresses these challenges by devising a tailored dataflow optimization, called FLAT, for attention mechanisms without altering their functionality. This dataflow processes costly attention operations through a unique fusion mechanism, transforming the memory footprint quadratic growth to merely a linear one. To realize the full potential of this bespoke mechanism, we propose a tiling approach to enhance the data reuse across attention operations. Our method both mitigates the off-chip bandwidth bottleneck as well as reduces the on-chip memory requirement. FLAT delivers 1.94x (1.76x) speedup and 49% and (42%) of energy savings compared to the state-of-the-art Edge (Cloud) accelerators with no customized dataflow optimization. When on-chip resources are scarce (20 KB-200 KB), FLAT yields, on average, 1.5x end-to-end latency reduction across a diverse range of conventional attention-based models with input sequence lengths ranging from 512-token to 64K-token. Our evaluations demonstrate that state-of-the-art DNN dataflow applied to attention operations reach the efficiency limit for inputs above 512 elements. 
In contrast, FLAT unblocks transformer models for inputs with up to 64K elements" Refining Datapath for Microscaling ViTs,2505.22194v1,li2020ftrans,\cite{li2020ftrans},FTRANS: Energy-Efficient Acceleration of Transformers using FPGA,http://arxiv.org/abs/2007.08563v1,"In natural language processing (NLP), the ""Transformer"" architecture was proposed as the first transduction model replying entirely on self-attention mechanisms without using sequence-aligned recurrent neural networks (RNNs) or convolution, and it achieved significant improvements for sequence to sequence tasks. The introduced intensive computation and storage of these pre-trained language representations has impeded their popularity into computation and memory-constrained devices. The field-programmable gate array (FPGA) is widely used to accelerate deep learning algorithms for its high parallelism and low latency. However, the trained models are still too large to accommodate to an FPGA fabric. In this paper, we propose an efficient acceleration framework, Ftrans, for transformer-based large scale language representations. Our framework includes enhanced block-circulant matrix (BCM)-based weight representation to enable model compression on large-scale language representations at the algorithm level with few accuracy degradation, and an acceleration design at the architecture level. Experimental results show that our proposed framework significantly reduces the model size of NLP models by up to 16 times. Our FPGA design achieves 27.07x and 81x improvement in performance and energy efficiency compared to CPU, and up to 8.80x improvement in energy efficiency compared to GPU.",True,True,"Li, Bingbing and Pandey, Santosh and Fang, Haowen and Lyv, Yanjun and Li, Ji and Chen, Jieyang and Xie, Mimi and Wan, Lipeng and Liu, Hang and Ding, Caiwen",2020.0,,,,,FTRANS: Energy-Efficient Acceleration of Transformers using FPGA,[PDF] FTRANS: Energy-Efficient Acceleration of Transformers using FPGA,https://scispace.com/pdf/ftrans-energy-efficient-acceleration-of-transformers-using-4ipjn26xe9.pdf,"In this paper, we propose an energy-efficient acceleration frame- work, Ftrans, for transformer-based large scale language repre- sentations using FPGA. Ftrans" Refining Datapath for Microscaling ViTs,2505.22194v1,lu2021sanger,\cite{lu2021sanger},Sanger: A co-design framework for enabling sparse attention using reconfigurable architecture,,,True,False,"Lu, Liqiang and Jin, Yicheng and Bi, Hangrui and Luo, Zizhang and Li, Peng and Wang, Tao and Liang, Yun",2021.0,,,,,Sanger: A co-design framework for enabling sparse attention using reconfigurable architecture,hatsu3/Sanger - GitHub,https://github.com/hatsu3/Sanger,This repository implements the proposed framework in the paper Sanger: A Co-Design Framework for Enabling Sparse Attention using Reconfigurable Architecture ( Refining Datapath for Microscaling ViTs,2505.22194v1,zadeh2020gobo,\cite{zadeh2020gobo},"GOBO: Quantizing Attention-Based NLP Models for Low Latency and Energy Efficient Inference",http://arxiv.org/abs/2005.03842v2,"Attention-based models have demonstrated remarkable success in various natural language understanding tasks. However, efficient execution remains a challenge for these models which are memory-bound due to their massive number of parameters. 
We present GOBO, a model quantization technique that compresses the vast majority (typically 99.9%) of the 32-bit floating-point parameters of state-of-the-art BERT models and their variants to 3 bits while maintaining their accuracy. Unlike other quantization methods, GOBO does not require fine-tuning nor retraining to compensate for the quantization error. We present two practical hardware applications of GOBO. In the first GOBO reduces memory storage and traffic and as a result inference latency and energy consumption. This GOBO memory compression mechanism is plug-in compatible with many architectures; we demonstrate it with the TPU, Eyeriss, and an architecture using Tensor Cores-like units. Second, we present a co-designed hardware architecture that also reduces computation. Uniquely, the GOBO architecture maintains most of the weights in 3b even during computation, a property that: (1) makes the processing elements area efficient, allowing us to pack more compute power per unit area, (2) replaces most multiply-accumulations with additions, and (3) reduces the off-chip traffic by amplifying on-chip memory capacity.",True,True,"Zadeh, Ali Hadi and Edo, Isak and Awad, Omar Mohamed and Moshovos, Andreas",2020.0,,,,,"GOBO: Quantizing Attention-Based NLP Models for Low Latency and Energy Efficient Inference",[PDF] GOBO: Quantizing Attention-Based NLP Models for Low Latency ...,https://microarch.org/micro53/papers/738300a811.pdf,"GOBO is a model quantization technique that compresses 99.9% of NLP model parameters to 3 bits, maintaining accuracy without fine-tuning." Refining Datapath for Microscaling ViTs,2505.22194v1,tambe2021edgebert,\cite{tambe2021edgebert},"EdgeBERT: Sentence-Level Energy Optimizations for Latency-Aware Multi-Task NLP Inference",http://arxiv.org/abs/2011.14203v5,"Transformer-based language models such as BERT provide significant accuracy improvement for a multitude of natural language processing (NLP) tasks. However, their hefty computational and memory demands make them challenging to deploy to resource-constrained edge platforms with strict latency requirements. We present EdgeBERT, an in-depth algorithm-hardware co-design for latency-aware energy optimization for multi-task NLP. EdgeBERT employs entropy-based early exit predication in order to perform dynamic voltage-frequency scaling (DVFS), at a sentence granularity, for minimal energy consumption while adhering to a prescribed target latency. Computation and memory footprint overheads are further alleviated by employing a calibrated combination of adaptive attention span, selective network pruning, and floating-point quantization. Furthermore, in order to maximize the synergistic benefits of these algorithms in always-on and intermediate edge computing settings, we specialize a 12nm scalable hardware accelerator system, integrating a fast-switching low-dropout voltage regulator (LDO), an all-digital phase-locked loop (ADPLL), as well as, high-density embedded non-volatile memories (eNVMs) wherein the sparse floating-point bit encodings of the shared multi-task parameters are carefully stored. 
Altogether, latency-aware multi-task NLP inference acceleration on the EdgeBERT hardware system generates up to 7x, 2.5x, and 53x lower energy compared to the conventional inference without early stopping, the latency-unbounded early exit approach, and CUDA adaptations on an Nvidia Jetson Tegra X2 mobile GPU, respectively.",True,True,"Tambe, Thierry and Hooper, Coleman and Pentecost, Lillian and Jia, Tianyu and Yang, En-Yu and Donato, Marco and Sanh, Victor and Whatmough, Paul and Rush, Alexander M. and Brooks, David and Wei, Gu-Yeon",2021.0,,https://doi.org/10.1145/3466752.3480095,10.1145/3466752.3480095,,"EdgeBERT: Sentence-Level Energy Optimizations for Latency-Aware Multi-Task NLP Inference",Sentence-Level Energy Optimizations for Latency-Aware ...,https://dl.acm.org/doi/10.1145/3466752.3480095,"We present EdgeBERT, an in-depth algorithm-hardware co-design for latency-aware energy optimizations for multi-task NLP." Refining Datapath for Microscaling ViTs,2505.22194v1,qin2023fact,\cite{qin2023fact},FACT: FFN-attention Co-optimized transformer architecture with eager correlation prediction,,,True,False,"Qin, Yubin and Wang, Yang and Deng, Dazheng and Zhao, Zhiren and Yang, Xiaolong and Liu, Leibo and Wei, Shaojun and Hu, Yang and Yin, Shouyi",2023.0,,,,,FACT: FFN-attention Co-optimized transformer architecture with eager correlation prediction,FACT: FFN-Attention Co-optimized Transformer Architecture with ...,https://dl.acm.org/doi/pdf/10.1145/3579371.3589057,We first propose an eager prediction algorithm which predicts the attention matrix before QKV generation. It fur- ther detects the unnecessary Refining Datapath for Microscaling ViTs,2505.22194v1,zeng2024flightllm,\cite{zeng2024flightllm},"FlightLLM: Efficient Large Language Model Inference with a Complete Mapping Flow on FPGAs",http://arxiv.org/abs/2401.03868v2,"Transformer-based Large Language Models (LLMs) have made a significant impact on various domains. However, LLMs' efficiency suffers from both heavy computation and memory overheads. Compression techniques like sparsification and quantization are commonly used to mitigate the gap between LLM's computation/memory overheads and hardware capacity. However, existing GPU and transformer-based accelerators cannot efficiently process compressed LLMs, due to the following unresolved challenges: low computational efficiency, underutilized memory bandwidth, and large compilation overheads. This paper proposes FlightLLM, enabling efficient LLMs inference with a complete mapping flow on FPGAs. In FlightLLM, we highlight an innovative solution that the computation and memory overhead of LLMs can be solved by utilizing FPGA-specific resources (e.g., DSP48 and heterogeneous memory hierarchy). We propose a configurable sparse DSP chain to support different sparsity patterns with high computation efficiency. Second, we propose an always-on-chip decode scheme to boost memory bandwidth with mixed-precision support. Finally, to make FlightLLM available for real-world LLMs, we propose a length adaptive compilation method to reduce the compilation overhead. Implemented on the Xilinx Alveo U280 FPGA, FlightLLM achieves 6.0$\times$ higher energy efficiency and 1.8$\times$ better cost efficiency against commercial GPUs (e.g., NVIDIA V100S) on modern LLMs (e.g., LLaMA2-7B) using vLLM and SmoothQuant under the batch size of one. 
FlightLLM beats NVIDIA A100 GPU with 1.2$\times$ higher throughput using the latest Versal VHK158 FPGA.",True,True,"Zeng, Shulin and Liu, Jun and Dai, Guohao and Yang, Xinhao and Fu, Tianyu and Wang, Hongyi and Ma, Wenheng and Sun, Hanbo and Li, Shiyao and Huang, Zixiao and others",2024.0,,,,arXiv preprint arXiv:2401.03868,"FlightLLM: Efficient Large Language Model Inference with a Complete Mapping Flow on FPGAs",FlightLLM: Efficient Large Language Model Inference with a ...,https://dl.acm.org/doi/10.1145/3626202.3637562,"This paper proposes FlightLLM, enabling efficient LLMs inference with a complete mapping flow on FPGAs." Refining Datapath for Microscaling ViTs,2505.22194v1,li2022auto,\cite{li2022auto},Auto-vit-acc: An fpga-aware automatic acceleration framework for vision transformer with mixed-scheme quantization,,,True,False,"Li, Zhengang and Sun, Mengshu and Lu, Alec and Ma, Haoyu and Yuan, Geng and Xie, Yanyue and Tang, Hao and Li, Yanyu and Leeser, Miriam and Wang, Zhangyang and others",2022.0,,,,,Auto-vit-acc: An fpga-aware automatic acceleration framework for vision transformer with mixed-scheme quantization,[PDF] Auto-ViT-Acc: An FPGA-Aware Automatic Acceleration Framework ...,https://www.sfu.ca/~zhenman/files/C25-FPL2022-Auto-ViT-Acc.pdf, Refining Datapath for Microscaling ViTs,2505.22194v1,dong2023heatvit,\cite{dong2023heatvit},"HeatViT: Hardware-Efficient Adaptive Token Pruning for Vision Transformers",http://arxiv.org/abs/2211.08110v2,"While vision transformers (ViTs) have continuously achieved new milestones in the field of computer vision, their sophisticated network architectures with high computation and memory costs have impeded their deployment on resource-limited edge devices. In this paper, we propose a hardware-efficient image-adaptive token pruning framework called HeatViT for efficient yet accurate ViT acceleration on embedded FPGAs. By analyzing the inherent computational patterns in ViTs, we first design an effective attention-based multi-head token selector, which can be progressively inserted before transformer blocks to dynamically identify and consolidate the non-informative tokens from input images. Moreover, we implement the token selector on hardware by adding miniature control logic to heavily reuse existing hardware components built for the backbone ViT. To improve the hardware efficiency, we further employ 8-bit fixed-point quantization, and propose polynomial approximations with regularization effect on quantization error for the frequently used nonlinear functions in ViTs. Finally, we propose a latency-aware multi-stage training strategy to determine the transformer blocks for inserting token selectors and optimize the desired (average) pruning rates for inserted token selectors, in order to improve both the model accuracy and inference latency on hardware. Compared to existing ViT pruning studies, under the similar computation cost, HeatViT can achieve 0.7%$\sim$8.9% higher accuracy; while under the similar model accuracy, HeatViT can achieve more than 28.4%$\sim$65.3% computation reduction, for various widely used ViTs, including DeiT-T, DeiT-S, DeiT-B, LV-ViT-S, and LV-ViT-M, on the ImageNet dataset.
Compared to the baseline hardware accelerator, our implementations of HeatViT on the Xilinx ZCU102 FPGA achieve 3.46$\times$$\sim$4.89$\times$ speedup.",True,True,"Dong, Peiyan and Sun, Mengshu and Lu, Alec and Xie, Yanyue and Liu, Kenneth and Kong, Zhenglun and Meng, Xin and Li, Zhengang and Lin, Xue and Fang, Zhenman and others",2023.0,,,,,"HeatViT: Hardware-Efficient Adaptive Token Pruning for Vision Transformers",Hardware-Efficient Adaptive Token Pruning for Vision Transformers,https://ieeexplore.ieee.org/iel7/10070856/10070923/10071047.pdf,"HeatViT is a hardware-efficient token pruning framework for ViTs, using a token selector to reduce token number and package non-informative tokens." Refining Datapath for Microscaling ViTs,2505.22194v1,huang2023integer,\cite{huang2023integer},An Integer-Only and Group-Vector Systolic Accelerator for Efficiently Mapping Vision Transformer on Edge,,,True,False,"Huang, Mingqiang and Luo, Junyi and Ding, Chenchen and Wei, Zikun and Huang, Sixiao and Yu, Hao",2023.0,,,,IEEE Transactions on Circuits and Systems I: Regular Papers,An Integer-Only and Group-Vector Systolic Accelerator for Efficiently Mapping Vision Transformer on Edge,An Integer-Only and Group-Vector Systolic Accelerator for ...,https://colab.ws/articles/10.1109%2Ftcsi.2023.3312775,"Therefore, in this work, we propose the ViA, a novel vision transformer (ViT) accelerator architecture based on FPGA, to execute the transformer" "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,DBLP:conf/rtss/BrandenburgG16,\cite{DBLP:conf/rtss/BrandenburgG16},"Global Scheduling Not Required: Simple, Near-Optimal Multiprocessor Real-Time Scheduling with Semi-Partitioned Reservations",,,True,False,"Bj{\""{o}}rn B. Brandenburg and Mahircan Gul",2016.0,,,,,"Global Scheduling Not Required: Simple, Near-Optimal Multiprocessor Real-Time Scheduling with Semi-Partitioned Reservations","[PDF] Simple, Near-Optimal Multiprocessor Real-Time Scheduling with ...",https://people.mpi-sws.org/~bbb/papers/pdf/rtss16b.pdf, "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,ekberg2021partitioned,\cite{ekberg2021partitioned},Partitioned Scheduling of Recurrent Real-Time Tasks,,,True,False,"Ekberg, Pontus and Baruah, Sanjoy",2021.0,,,,,Partitioned Scheduling of Recurrent Real-Time Tasks,Partitioned Scheduling of Recurrent Real-Time Tasks,https://user.it.uu.se/~ponek616/files/RTSS21/RTSS21.pdf,"by P Ekberg · Cited by 11 — Under the partitioned paradigm of multiprocessor scheduling for recurrent tasks, each task is pre-assigned to an individual processor and all jobs generated by" "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,Burchard:1995,\cite{Burchard:1995},New strategies for assigning real-time tasks to multiprocessor systems,,,True,False,"Burchard, Almut and Liebeherr, Jörg and Oh, Yingfeng and Son, Sang H.",1995.0,,,,IEEE Transactions on Computers,New strategies for assigning real-time tasks to multiprocessor systems,New Strategies for Assigning Real-Time Tasks to ...,https://www.computer.org/csdl/journal/tc/1995/12/t1429/13rRUwd9CF8,by J Liebeherr · 1995 · Cited by 378 — There are two strategies for scheduling real-time tasks on a multiprocessor system.
In a global scheme each occurrence of a real-time task may be executed on a "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,Dhall:1978,\cite{Dhall:1978},On a real-time scheduling problem,,,True,False,"Dhall, Sudarshan K and Liu, Chung Laung",1978.0,,,,Operations research,On a real-time scheduling problem,On a Real-Time Scheduling Problem | Operations Research,https://pubsonline.informs.org/doi/10.1287/opre.26.1.127,"The scheduling problem is to specify an order in which the requests of a set of tasks are to be executed and the processor to be used, with the goal of meeting" "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,Baruah:2005,\cite{Baruah:2005},The partitioned multiprocessor scheduling of sporadic task systems,,,True,False,"Baruah, Sanjoy and Fisher, Nathan",2005.0,,,,,The partitioned multiprocessor scheduling of sporadic task systems,The partitioned multiprocessor scheduling of sporadic task ...,https://ieeexplore.ieee.org/document/1563119/,by S Baruah · 2005 · Cited by 204 — A polynomial-time algorithm is presented for partitioning a collection of sporadic tasks among the processors of an identical multiprocessor platform. "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,Lopez:2000,\cite{Lopez:2000},Worst-case utilization bound for EDF scheduling on real-time multiprocessor systems,,,True,False,"José María López and Garcia, Manuel and Díaz, Jose and Garcia, Frk Daniel",2000.0,,,,,Worst-case utilization bound for EDF scheduling on real-time multiprocessor systems,EDF Scheduling on Heterogeneous Multiprocessors,https://research.engineering.wustl.edu/~baruah/DISSERTATIONS/01funk.pdf,"by SH Funk · 2004 · Cited by 57 — Worst-case utilization bound for EDF scheduling on real-time multiprocessor systems. In Proceedings of the EuroMicro Conference on Real-Time Systems, pages" "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,Fisher:2006,\cite{Fisher:2006},The partitioned multiprocessor scheduling of non-preemptive sporadic task systems,,,True,False,"Fisher, Nathan and Baruah, Sanjoy",2006.0,,,,,The partitioned multiprocessor scheduling of non-preemptive sporadic task systems,The Partitioned Multiprocessor Scheduling of Sporadic Task Systems,https://fishern.eng.wayne.edu/papers/2005-baruahFisher-RTSS.pdf,"On multiprocessor systems, two alternative paradigms for scheduling collections of sporadic tasks have been considered: partitioned and global scheduling. In the partitioned approach, the tasks are statically partitioned among the processors, i.e., each task is assigned to a processor and is always executed on it." "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,Senoussaoui:2020,\cite{Senoussaoui:2020},Allocation of Real-Time Tasks onto Identical Core Platforms under Deferred fixed Preemption-Point Model,,,True,False,"Senoussaoui, Ikram and Zahaf, Houssam-Eddine and Benhaoua, Mohammed Kamel and Lipari, Giuseppe and Olejnik, Richard",2020.0,,,10.1145/3394810.3394821,,Allocation of Real-Time Tasks onto Identical Core Platforms under Deferred fixed Preemption-Point Model,[PDF] Allocation of Real-Time Tasks onto Identical Core Platforms ...
- HAL,https://hal.science/hal-02886816/document, "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,fonseca2016response,\cite{fonseca2016response},Response time analysis of sporadic DAG tasks under partitioned scheduling,,,True,False,"Fonseca, Jos{\'e} and Nelissen, Geoffrey and Nelis, Vincent and Pinho, Lu{\'\i}s Miguel",2016.0,,,,,Response time analysis of sporadic DAG tasks under partitioned scheduling,Response time analysis of sporadic DAG tasks under partitioned ...,https://ieeexplore.ieee.org/document/7509443,"More precisely, we present a response time analysis for sporadic DAG tasks atop multiprocessors under partitioned fixed-priority scheduling. We assume the" "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,casini2018partitioned,\cite{casini2018partitioned},Partitioned fixed-priority scheduling of parallel tasks without preemptions,,,True,False,"Casini, Daniel and Biondi, Alessandro and Nelissen, Geoffrey and Buttazzo, Giorgio",2018.0,,,,,Partitioned fixed-priority scheduling of parallel tasks without preemptions,Partitioned Fixed-Priority Scheduling of Parallel Tasks ...,http://ieeexplore.ieee.org/document/8603232/,Abstract: The study of parallel task models executed with predictable scheduling approaches is a fundamental problem for real-time multiprocessor systems. "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,Zahaf:2020,\cite{Zahaf:2020},"Preemption-Aware Allocation, Deadline Assignment for Conditional DAGs on Partitioned EDF",,,True,False,"Zahaf, Houssam-Eddine and Lipari, Giuseppe and Niar, Smail and Hassan Benyamina, Abou El",2020.0,,,10.1109/RTCSA50079.2020.9203643,,"Preemption-Aware Allocation, Deadline Assignment for Conditional DAGs on Partitioned EDF","Preemption-Aware Allocation, Deadline Assignment for Conditional ...",https://www.computer.org/csdl/proceedings-article/rtcsa/2020/09203643/1nkD7ZL5ycE,"Preemption-Aware Allocation, Deadline Assignment for Conditional DAGs on Partitioned EDF. 2020, pp. 1-10. DOI Bookmark: 10.1109/RTCSA50079.2020.9203643." "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,Ueter:2021,\cite{Ueter:2021},{Hard Real-Time Stationary GANG-Scheduling},,,True,False,"Ueter, Niklas and G\""{u}nzel, Mario and von der Br\""{u}ggen, Georg and Chen, Jian-Jia",2021.0,,,10.4230/LIPIcs.ECRTS.2021.10,,{Hard Real-Time Stationary GANG-Scheduling},[PDF] Hard Real-Time Stationary GANG-Scheduling - DROPS,https://drops.dagstuhl.de/storage/00lipics/lipics-vol196-ecrts2021/LIPIcs.ECRTS.2021.10/LIPIcs.ECRTS.2021.10.pdf,"Contributions: In this paper we explore stationary gang scheduling for a set of sporadic real-time tasks with constrained deadlines (i.e., the relative" "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,sun2024strict,\cite{sun2024strict},Strict Partitioning for Sporadic Rigid Gang Tasks,http://arxiv.org/abs/2403.10726v2,"The rigid gang task model is based on the idea of executing multiple threads simultaneously on a fixed number of processors to increase efficiency and performance. Although there is extensive literature on global rigid gang scheduling, partitioned approaches have several practical advantages (e.g., task isolation and reduced scheduling overheads).
In this paper, we propose a new partitioned scheduling strategy for rigid gang tasks, named strict partitioning. The method creates disjoint partitions of tasks and processors to avoid inter-partition interference. Moreover, it tries to assign tasks with similar volumes (i.e., parallelisms) to the same partition so that the intra-partition interference can be reduced. Within each partition, the tasks can be scheduled using any type of scheduler, which allows the use of a less pessimistic schedulability test. Extensive synthetic experiments and a case study based on Edge TPU benchmarks show that strict partitioning achieves better schedulability performance than state-of-the-art global gang schedulability analyses for both preemptive and non-preemptive rigid gang task sets.",True,True,"Sun, Binqi and Kloda, Tomasz and Caccamo, Marco",2024.0,,,,,Strict Partitioning for Sporadic Rigid Gang Tasks,Strict Partitioning for Sporadic Rigid Gang Tasks,http://arxiv.org/pdf/2403.10726v2,"The rigid gang task model is based on the idea of executing multiple threads simultaneously on a fixed number of processors to increase efficiency and performance. Although there is extensive literature on global rigid gang scheduling, partitioned approaches have several practical advantages (e.g., task isolation and reduced scheduling overheads). In this paper, we propose a new partitioned scheduling strategy for rigid gang tasks, named strict partitioning. The method creates disjoint partitions of tasks and processors to avoid inter-partition interference. Moreover, it tries to assign tasks with similar volumes (i.e., parallelisms) to the same partition so that the intra-partition interference can be reduced. Within each partition, the tasks can be scheduled using any type of scheduler, which allows the use of a less pessimistic schedulability test. Extensive synthetic experiments and a case study based on Edge TPU benchmarks show that strict partitioning achieves better schedulability performance than state-of-the-art global gang schedulability analyses for both preemptive and non-preemptive rigid gang task sets." "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,sun2024partitioned,\cite{sun2024partitioned},Partitioned scheduling and parallelism assignment for real-time DNN inference tasks on multi-TPU,,,True,False,"Sun, Binqi and Kloda, Tomasz and Wu, Chu-ge and Caccamo, Marco",2024.0,,,,,Partitioned scheduling and parallelism assignment for real-time DNN inference tasks on multi-TPU,[PDF] Partitioned Scheduling and Parallelism Assignment for Real-Time ...,https://laas.hal.science/hal-04803800/document,We propose an NPG strict partitioning strategy for scheduling. DNN tasks on multi-TPU and a strict partitioning heuristic to determine the "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,Zahaf:2021,\cite{Zahaf:2021},"Contention-Aware GPU Partitioning and Task-to-Partition Allocation for Real-Time Workloads",http://arxiv.org/abs/2105.10312v1,"In order to satisfy timing constraints, modern real-time applications require massively parallel accelerators such as General Purpose Graphic Processing Units (GPGPUs). Generation after generation, the number of computing clusters made available in novel GPU architectures is steadily increasing, hence, investigating suitable scheduling approaches is now mandatory. 
Such scheduling approaches are related to mapping different and concurrent compute kernels within the GPU computing clusters, hence grouping GPU computing clusters into schedulable partitions. In this paper we propose novel techniques to define GPU partitions; this allows us to define suitable task-to-partition allocation mechanisms in which tasks are GPU compute kernels featuring different timing requirements. Such mechanisms will take into account the interference that GPU kernels experience when running in overlapping time windows. Hence, an effective and simple way to quantify the magnitude of such interference is also presented. We demonstrate the efficiency of the proposed approaches against the classical techniques that considered the GPU as a single, non-partitionable resource.",True,True,"Zahaf, Houssam-Eddine and Olmedo, Ignacio Sanudo and Singh, Jayati and Capodieci, Nicola and Faucou, Sebastien",2021.0,,,10.1145/3453417.3453439,,"Contention-Aware GPU Partitioning and Task-to-Partition Allocation for Real-Time Workloads",Contention-Aware GPU Partitioning and Task-to- ...,http://pagesperso.ls2n.fr/~zahaf-h/research/2021/rtns_2021.pdf,"by HE Zahaf · Cited by 12 — Contention-Aware GPU Partitioning and Task-to-Partition Allocation for. Real-Time Workloads. Houssam-Eddine Zahaf, Ignacio Sañudo Olmedo, Jayati Singh, Nicola." "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,abeni2022partitioning,\cite{abeni2022partitioning},Partitioning real-time workloads on multi-core virtual machines,,,True,False,"Abeni, Luca and Biondi, Alessandro and Bini, Enrico",2022.0,,,,Journal of Systems Architecture,Partitioning real-time workloads on multi-core virtual machines,[PDF] Partitioning Real-Time Workloads on Multi-Core Virtual Machines,https://iris.unito.it/retrieve/fb327f6c-20e4-4a18-b202-acf4643df773/main.pdf,"duling), this paper proposes and compares some approaches for partitioning the real-time workloads in multi-core VMs. Some of the proposed" "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,Mo:2023,\cite{Mo:2023},Energy Optimized Task Mapping for Reliable and Real-Time Networked Systems,,,True,False,"Mo, Lei and Zhou, Qi and Kritikakou, Angeliki and Cao, Xianghui",2023.0,,,10.1145/3584985,ACM Trans. Sen. Netw.,Energy Optimized Task Mapping for Reliable and Real-Time Networked Systems,Energy Optimized Task Mapping for Reliable and Real-Time ...,https://dl.acm.org/doi/10.1145/3584985,"Energy efficiency, real-time response, and data transmission reliability are important objectives during networked systems design." "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,MDBCCP:13,\cite{MDBCCP:13},{Real-time cache management framework for multi-core architectures},,,True,False,"Mancuso, Renato and Dudko, Roman and Betti, Emiliano and Cesati, Marco and Caccamo, Marco and Pellizzoni, Rodolfo",2013.0,,,,,{Real-time cache management framework for multi-core architectures},[PDF] Cache Management and Time-triggered Scheduling for Hard Real ...,https://mediatum.ub.tum.de/attfile/1200769/hd2/incoming/2014-Apr/394974.pdf,"Real-time cache management framework for multi-core archi- tectures. In 2013 IEEE 19th Real-Time and Embedded Technology and. 
Applications Symposium (RTAS)," "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,Kim16:EMSOFT,\cite{Kim16:EMSOFT},Real-time cache management for multi-core virtualization,,,True,False,"Kim, Hyoseung and Rajkumar, Ragunathan",2016.0,,,,,Real-time cache management for multi-core virtualization,Real-time cache management for multi-core virtualization,https://ieeexplore.ieee.org/document/7743233/,"In this paper, we propose a real-time cache management framework for multi-core virtualization. Our framework introduces two hypervisor-level techniques, vLLC" "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,KWCFAS:17,\cite{KWCFAS:17},Attacking the One-Out-Of-m Multicore Problem by Combining Hardware Management with Mixed-Criticality Provisioning,,,True,False,"Kim, Namhoon and Ward, Bryan C. and Chisholm, Micaiah and Fu, Cheng-Yang and Anderson, James H. and Smith, F. Donelson",2017.0,,,,Real-Time Systems,Attacking the One-Out-Of-m Multicore Problem by Combining Hardware Management with Mixed-Criticality Provisioning,IEEE Real-Time and Embedded Technology and Applications ...,http://www.findresearch.org/conferences/conf/rtas/2016/conference.html,Attacking the One-Out-Of-m Multicore Problem by Combining Hardware Management with Mixed-Criticality Provisioning. "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,KSMCV:19,\cite{KSMCV:19},Deterministic memory hierarchy and virtualization for modern multi-core embedded systems,,,True,False,Tomasz {Kloda} and Marco {Solieri} and Renato {Mancuso} and Nicola {Capodieci} and Paolo {Valente} and Marko {Bertogna},2019.0,,,,,Deterministic memory hierarchy and virtualization for modern multi-core embedded systems,[PDF] The Key Role of Memory in Next-Generation Embedded Systems for ...,https://scispace.com/pdf/the-key-role-of-memory-in-next-generation-embedded-systems-43iu341iwc.pdf,Deterministic Memory Hierarchy and Virtualization for Modern Multi-Core Embedded Systems. In 25th IEEE Real-Time and Embedded Technology and Applications "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,xilinx-xen-cache-color,\cite{xilinx-xen-cache-color},{Xilinx Xen Support with Cache-Coloring},,,True,False,Xilinx,,,,,,{Xilinx Xen Support with Cache-Coloring},Cache Coloring: Interference-free Real-time Virtualization,https://xenproject.org/blog/cache-coloring-interference-free-real-time-virtualization/,"Stefano Stabellini from Xilinx gave a talk on Cache Coloring, a new feature for Xen that helps better support real-time workloads." "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,minerva-jailhouse,\cite{minerva-jailhouse},{Memory-aware Jailhouse hypervisor},,,True,False,{Minerva Systems},,,,,,{Memory-aware Jailhouse hypervisor},"Understanding the Jailhouse hypervisor, part 1 - LWN.net",https://lwn.net/Articles/578295/,"To enable the hypervisor, Jailhouse needs to initialize its subsystems, create a Linux cell according to the system configuration, enable VT-x on each CPU, and, finally, migrate Linux into its cell to continue running in guest mode. The entry point is defined in hypervisor/setup.c as arch_entry, which is coded in assembler and resides in x86/entry.S.
This code locates the per_cpu region for a given cpu_id, stores the Linux stack pointer and cpu_id in it, sets the Jailhouse stack, and calls the architecture-independent entry() function, passing it a pointer to cpu_data. It sets up paging, maps config_memory if it is present in the system configuration, checks the memory regions defined in the Linux cell descriptor for alignment and access flags, initializes the APIC, creates Jailhouse's Interrupt Descriptor Table (IDT), configures x2APIC guest (VMX non-root) access (if available), and initializes the Linux cell." "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,Survey-Way-Part,\cite{Survey-Way-Part},A Survey on Way-Based Cache Partitioning,,,True,False,"Das, Purnendu and Barbhuiya, Nurulla Mansur and Ranjan Roy, Bishwa",2023.0,,,,,A Survey on Way-Based Cache Partitioning,Multi-Objective Memory Bandwidth Regulation and Cache ...,https://arxiv.org/html/2505.11554v1,"A survey on way-based cache partitioning. In IEEE Silchar Subsection Conference (SILCON), pages 1–7, 2023. [18] ↑ Howard David, Chris" "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,arm-dynamiciq,\cite{arm-dynamiciq},{Arm DynamIQ Shared Unit Technical Reference Manual},,,True,False,Arm,,,,,,{Arm DynamIQ Shared Unit Technical Reference Manual},Arm DynamIQ Shared Unit Technical Reference Manual,https://developer.arm.com/documentation/100453/latest/,This Technical Reference Manual is for the DynamIQ Shared Unit ( DSU ). It describes the overall structure of the DSU including the main interfaces. "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,yun2013memguard,\cite{yun2013memguard},{MemGuard}: Memory bandwidth reservation system for efficient performance isolation in multi-core platforms,,,True,False,"Yun, Heechul and Yao, Gang and Pellizzoni, Rodolfo and Caccamo, Marco and Sha, Lui",2013.0,,,,,{MemGuard}: Memory bandwidth reservation system for efficient performance isolation in multi-core platforms,SlideShare MemGuard: Memory Bandwidth Reservation System for Efficient Performance Isolation in Multicore Platforms | PPTX,https://www.slideshare.net/slideshow/mem-guard-rtas13web-25212473/25212473,"This document describes MemGuard , an operating system mechanism for providing efficient per- core memory performance isolation on commercial off-the-shelf hardware. MemGuard uses memory bandwidth reservation to guarantee each core 's minimum memory bandwidth . 
It then performs predictive bandwidth" "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,MemPol,\cite{MemPol},{MemPol}: Policing Core Memory Bandwidth from Outside of the Cores,,,True,False,"Alexander Zuepke and Andrea Bastoni and Weifan Chen and Marco Caccamo and Renato Mancuso",2023.0,,,,,{MemPol}: Policing Core Memory Bandwidth from Outside of the Cores,[PDF] MemPol: Policing Core Memory Bandwidth from Outside of the Cores,https://blexi.de/papers/rtas2023.pdf,"In this work, we present a novel regulation mechanism from outside the cores that monitors performance counters for the application core's" "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,hassan2019reduced,\cite{hassan2019reduced},{Reduced latency DRAM for multi-core safety-critical real-time systems},,,True,False,"Hassan, Mohamed",2019.0,,,,Real-Time Systems,{Reduced latency DRAM for multi-core safety-critical real-time systems},Reduced latency DRAM for multi-core safety-critical real-time ...,https://www.ece.mcmaster.ca/faculty/hassan/assets/publications/hassan2019reduced.pdf,"by M Hassan · 2019 · Cited by 16 — Targeting these systems, we promote an alternative off-chip memory solution that is based on the emerging Reduced Latency. DRAM (RLDRAM) protocol, and propose a" "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,BRU:20,\cite{BRU:20},{BRU: Bandwidth Regulation Unit for Real-Time Multicore Processors},,,True,False,"Farshchi, Farzad and Huang, Qijing and Yun, Heechul",2020.0,,,,,{BRU: Bandwidth Regulation Unit for Real-Time Multicore Processors},BRU: Bandwidth Regulation Unit for Real-Time Multicore ...,https://github.com/CSL-KU/bru-firesim,BRU: Bandwidth Regulation Unit for Real-Time Multicore Processors. This repository contains the necessary files to reproduce the experiment results in the "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,intel-rdt,\cite{intel-rdt},{Resource Director Technology},,,True,False,Intel,,,,,,{Resource Director Technology},Intel® Resource Director Technology (Intel® RDT),https://www.intel.com/content/www/us/en/architecture-and-technology/resource-director-technology.html,"Intel® Resource Director Technology enables monitoring and control over shared processor resources, improved consolidation density and reduced TCO." "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,XPCLLLL:19,\cite{XPCLLLL:19},Holistic resource allocation for multicore real-time systems,,,True,False,"Xu, Meng and Phan, Linh Thi Xuan and Choi, Hyon-Young and Lin, Yuhan and Li, Haoran and Lu, Chenyang and Lee, Insup",2019.0,,,,,Holistic resource allocation for multicore real-time systems,Holistic resource allocation for multicore real-time systems ...,https://www.cis.upenn.edu/~linhphan/papers/rtas19-CaM-techreport.pdf,"Rather than decoupling them, we compute the mapping of tasks and the allocation of shared resources to cores in an integrated resource allocation strategy called CaM that considers the demands on CPU, cache, and memory bandwidth concurrently to minimize resources while ensuring timing guarantees. 
Existing work has also investigated the impact of other types of shared resources on real-time performance, including e.g., impacts of Miss Status Holding Registers (MSHR) on partitioned caches [57], interferences due to memory bank contention [32, 56, 69], TLB interferences [46], and effects of interrupts [14, 30, 47, 50]. CONCLUSION We have presented a resource allocation strategy for real-time multicore systems that considers CPU resource demand of a task and the allocation of cache and memory bandwidth resources in a holistic manner." "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,SBMYK:22,\cite{SBMYK:22},{A Closer Look at Intel Resource Director Technology (RDT)},,,True,False,"Sohal, Parul and Bechtel, Michael and Mancuso, Renato and Yun, Heechul and Krieger, Orran",2022.0,,,,,{A Closer Look at Intel Resource Director Technology (RDT)},A Closer Look at Intel Resource Director Technology (RDT),https://dl.acm.org/doi/abs/10.1145/3534879.3534882,We aim at conducting a systematic investigation of the RDT mechanisms from a real-time perspective. We experimentally evaluate the functionality and "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,arm-mpam,\cite{arm-mpam},{Arm Memory System Resource Partitioning and Monitoring (MPAM) System Component Specification},,,True,False,Arm,,,,,,{Arm Memory System Resource Partitioning and Monitoring (MPAM) System Component Specification},Memory System Resource Partitioning and Monitoring (MPAM ...,https://developer.arm.com/documentation/107768/latest/Overview,"MPAM is an optional Arm architecture addition for memory system partitioning. It's documented in two specifications, one for processor features and one for" "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,Altmeyer:2014,\cite{Altmeyer:2014},OUTSTANDING PAPER: Evaluation of Cache Partitioning for Hard Real-Time Systems,,,True,False,"Altmeyer, Sebastian and Douma, Roeland and Lunniss, Will and Davis, Robert I.",2014.0,,,10.1109/ECRTS.2014.11,,OUTSTANDING PAPER: Evaluation of Cache Partitioning for Hard Real-Time Systems,Will Lunniss - Google Scholar,https://scholar.google.com/citations?user=v_HmmSsAAAAJ&hl=en,"Outstanding paper: Evaluation of cache partitioning for hard real-time systems. S Altmeyer, R Douma, W Lunniss, RI Davis. 2014 26th Euromicro Conference on Real" "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,Altmeyer:2016,\cite{Altmeyer:2016},On the effectiveness of cache partitioning in hard real-time systems,,,True,False,Sebastian A. Altmeyer and Roeland Douma and Will Lunniss and Robert I. Davis,2016.0,,,,Real Time Systems,On the effectiveness of cache partitioning in hard real-time systems,[PDF] On the effectiveness of cache partitioning in hard real-time systems,https://eprints.whiterose.ac.uk/id/eprint/93504/1/art_3A10.1007_2Fs11241_015_9246_8.pdf,Cache partitioning is often suggested as a means of increasing the predictability of caches in pre-emptively scheduled hard real-time systems. The rationale "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,Bui:2008,\cite{Bui:2008},Impact of cache partitioning on multi-tasking real time embedded systems,,,True,False,"Bui, Bach D. 
and Caccamo, Marco and Sha, Lui and Martinez, Joseph",2008.0,,,,,Impact of cache partitioning on multi-tasking real time embedded systems,Impact of cache partitioning on multi-tasking real time ...,https://experts.illinois.edu/en/publications/impact-of-cache-partitioning-on-multi-tasking-real-time-embedded-,"by BD Bui · 2008 · Cited by 159 — A case study and experiments show that in a typical real-time embedded system, the proposed algorithm is able to reduce the worst-case utilization by 15% (on" "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,Meroni:2023,\cite{Meroni:2023},Mapping and Integration of Event- and Time-triggered Real-time Tasks on Partitioned Multi-core Systems,,,True,False,"Meroni, Carlo and Craciunas, Silviu S. and Finzi, Anaïs and Pop, Paul",2023.0,,,10.1109/ETFA54631.2023.10275547,,Mapping and Integration of Event- and Time-triggered Real-time Tasks on Partitioned Multi-core Systems,Multi-Objective Memory Bandwidth Regulation and Cache ... - DROPS,https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2025.2,Mapping and integration of event- and time-triggered real-time tasks on partitioned multi-core systems. In 2023 IEEE 28th International Conference on "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,sun2023minimizing,\cite{sun2023minimizing},Minimizing cache usage for real-time systems,,,True,False,"Sun, Binqi and Kloda, Tomasz and Arribas Garcia, Sergio and Gracioli, Giovani and Caccamo, Marco",2023.0,,,,,Minimizing cache usage for real-time systems,[PDF] Minimizing Cache Usage for Real-time Systems - LAAS - HAL,https://laas.hal.science/hal-04803571v1/document,The main benefit of cache partitioning in real-time systems is that it removes inter-task interference: preempting task will not evict the "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,sun2024minimizing,\cite{sun2024minimizing},Minimizing cache usage with fixed-priority and earliest deadline first scheduling,,,True,False,"Sun, Binqi and Kloda, Tomasz and Garcia, Sergio Arribas and Gracioli, Giovani and Caccamo, Marco",2024.0,,,,Real-Time Systems,Minimizing cache usage with fixed-priority and earliest deadline first scheduling,Minimizing cache usage with fixed-priority and earliest deadline first ...,https://ofb.hal.science/LAAS-INFORMATIQUE-CRITIQUE/hal-04803808v1,"Cache partitioning is a technique to reduce interference among tasks running on the processors with shared caches. To make this technique effective," "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,aghilinasab2020dynamic,\cite{aghilinasab2020dynamic},Dynamic memory bandwidth allocation for real-time GPU-based SoC platforms,,,True,False,"Aghilinasab, Homa and Ali, Waqar and Yun, Heechul and Pellizzoni, Rodolfo",2020.0,,,,IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems,Dynamic memory bandwidth allocation for real-time GPU-based SoC platforms,Appendix to: Dynamic Memory Bandwidth Allocation For Real-Time ...,https://uwspace.uwaterloo.ca/items/12c07957-e40a-4208-a0bc-dbdd7583997c,Appendix to: Dynamic Memory Bandwidth Allocation For Real-Time GPU-Based SOC Platforms. 
"Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,park2019copart,\cite{park2019copart},Copart: Coordinated partitioning of last-level cache and memory bandwidth for fairness-aware workload consolidation on commodity servers,,,True,False,"Park, Jinsu and Park, Seongbeom and Baek, Woongki",2019.0,,,,,Copart: Coordinated partitioning of last-level cache and memory bandwidth for fairness-aware workload consolidation on commodity servers,[PDF] bandwidth allocation and cache partitioning for multicore processors,https://zaguan.unizar.es/record/124424/files/texto_completo.pdf,(2019) CoPart: coordinated partitioning of last-level cache and memory bandwidth for fairness-aware workload consolidation on commodity servers. In: EuroSys. "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,CWKA:15,\cite{CWKA:15},Cache sharing and isolation tradeoffs in multicore mixed-criticality systems,,,True,False,"Chisholm, Micaiah and Ward, Bryan C. and Kim, Namhoon and Anderson, James H.",2015.0,,,,,Cache sharing and isolation tradeoffs in multicore mixed-criticality systems,REPORT DOCUMENTATION PAGE - DTIC,https://apps.dtic.mil/sti/pdfs/AD1051095.pdf,by JH Anderson · 2017 — Paper Title: Cache Sharing and Isolation Tradeoffs in Multicore Mixed-Criticality Systems. Publication Type: Conference Paper or Presentation. "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,Berna:2012,\cite{Berna:2012},{PDPA}: Period Driven Task and Cache Partitioning Algorithm for Multi-Core Systems,,,True,False,"Berna, Brice and Puaut, Isabelle",2012.0,,,,,{PDPA}: Period Driven Task and Cache Partitioning Algorithm for Multi-Core Systems,PDPA: Period Driven Task and Cache Partitioning ...,http://www.irisa.fr/alf/downloads/puaut/papers/RTNS2012pdpa.pdf,"by B Berna · 2012 · Cited by 25 — In this paper, we present a new algorithm for joint task and cache partitioning in multi-core systems scheduled using non-preemptive EDF. The main novelty of ..." "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,Paolieri:2011,\cite{Paolieri:2011},{$IA^3$}: An Interference Aware Allocation Algorithm for Multicore Hard Real-Time Systems,,,True,False,"Paolieri, Marco and Quiñones, Eduardo and Cazorla, Francisco J. and Davis, Robert I. and Valero, Mateo",2011.0,,,,,{$IA^3$}: An Interference Aware Allocation Algorithm for Multicore Hard Real-Time Systems,IA^3: An Interference Aware Allocation Algorithm for Multicore ...,http://ieeexplore.ieee.org/document/5767118/similar,In this paper we introduce IA3: an interference-aware allocation algorithm that considers not a single WCET estimation but a set of WCET estimations per task. "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,sun2023co,\cite{sun2023co},"Co-Optimizing Cache Partitioning and Multi-Core Task Scheduling: Exploit Cache Sensitivity or Not?",http://arxiv.org/abs/2310.02959v1,"Cache partitioning techniques have been successfully adopted to mitigate interference among concurrently executing real-time tasks on multi-core processors.
Considering that the execution time of a cache-sensitive task strongly depends on the cache available for it to use, co-optimizing cache partitioning and task allocation improves the system's schedulability. In this paper, we propose a hybrid multi-layer design space exploration technique to solve this multi-resource management problem. We explore the interplay between cache partitioning and schedulability by systematically interleaving three optimization layers, viz., (i) in the outer layer, we perform a breadth-first search combined with proactive pruning for cache partitioning; (ii) in the middle layer, we exploit a first-fit heuristic for allocating tasks to cores; and (iii) in the inner layer, we use the well-known recurrence relation for the schedulability analysis of non-preemptive fixed-priority (NP-FP) tasks in a uniprocessor setting. Although our focus is on NP-FP scheduling, we evaluate the flexibility of our framework in supporting different scheduling policies (NP-EDF, P-EDF) by plugging in appropriate analysis methods in the inner layer. Experiments show that, compared to the state-of-the-art techniques, the proposed framework can improve the real-time schedulability of NP-FP task sets by an average of 15.2% with a maximum improvement of 233.6% (when tasks are highly cache-sensitive) and a minimum of 1.6% (when cache sensitivity is low). For such task sets, we found that clustering similar-period (or mutually compatible) tasks often leads to higher schedulability (on average 7.6%) than clustering by cache sensitivity. In our evaluation, the framework also achieves good results for preemptive and dynamic-priority scheduling policies.",True,True,"Sun, Binqi and Roy, Debayan and Kloda, Tomasz and Bastoni, Andrea and Pellizzoni, Rodolfo and Caccamo, Marco",2023.0,,,,,"Co-Optimizing Cache Partitioning and Multi-Core Task Scheduling: Exploit Cache Sensitivity or Not?",Co-Optimizing Cache Partitioning and Multi-Core Task Scheduling,https://arxiv.org/abs/2310.02959,"Considering that the execution time of a cache-sensitive task strongly depends on the cache available for it to use, co-optimizing cache" "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,Meng:2019,\cite{Meng:2019},Holistic multi-resource allocation for multicore real-time virtualization,,,True,False,"Xu, Meng and Gifford, Robert and Phan, Linh Thi Xuan",2019.0,,,,,Holistic multi-resource allocation for multicore real-time virtualization,Holistic multi-resource allocation for multicore real-time virtualization,https://dl.acm.org/doi/10.1145/3316781.3317840,"This paper presents vC2M, a holistic multi-resource allocation framework for real-time multicore virtualization. vC2M integrates shared cache allocation" "Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,Nie:2022,\cite{Nie:2022},Holistic Resource Allocation Under Federated Scheduling for Parallel Real-time Tasks,,,True,False,"Nie, Lanshun and Fan, Chenghao and Lin, Shuang and Zhang, Li and Li, Yajuan and Li, Jing",2022.0,,,,ACM Trans. Embed. Comput. Syst.,Holistic Resource Allocation Under Federated Scheduling for Parallel Real-time Tasks,Holistic Resource Allocation Under Federated Scheduling for ...,https://dl.acm.org/doi/10.1145/3489467,"To tackle this issue, in this work, we present a holistic resource allocation framework for parallel real-time tasks under federated scheduling." 
"Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems",2505.11554v1,gifford2021dna,\cite{gifford2021dna},{DNA}: Dynamic resource allocation for soft real-time multicore systems,,,True,False,"Gifford, Robert and Gandhi, Neeraj and Phan, Linh Thi Xuan and Haeberlen, Andreas",2021.0,,,,,{DNA}: Dynamic resource allocation for soft real-time multicore systems,[PDF] Dynamic Resource Allocation for Soft Real-Time Multicore Systems,https://www.cis.upenn.edu/~linhphan/papers/rtas21-dna.pdf,"DNA increases throughput and decreases latency, by building an execution profile of each task to identify the phases, and then dynamically allocating resources" "NMP-PaK: Near-Memory Processing Acceleration of Scalable De Novo Genome Assembly",2505.08071v1,zhou2021ultra,\cite{zhou2021ultra},Ultra efficient acceleration for de novo genome assembly via near-memory computing,,,True,False,"Zhou, Minxuan and Wu, Lingxi and Li, Muzhou and Moshiri, Niema and Skadron, Kevin and Rosing, Tajana",2021.0,,,,,Ultra efficient acceleration for de novo genome assembly via near-memory computing,Ultra Efficient Acceleration for De Novo Genome Assembly via ...,https://dl.acm.org/doi/10.1109/PACT52795.2021.00022,"In this work, we accelerate two key performance bottlenecks of DBG-based assembly, graph construction and graph traversal, with a near-data" "NMP-PaK: Near-Memory Processing Acceleration of Scalable De Novo Genome Assembly",2505.08071v1,li2015megahit,\cite{li2015megahit},"MEGAHIT: An ultra-fast single-node solution for large and complex metagenomics assembly via succinct de Bruijn graph",http://arxiv.org/abs/1409.7208v2,"MEGAHIT is a NGS de novo assembler for assembling large and complex metagenomics data in a time- and cost-efficient manner. It finished assembling a soil metagenomics dataset with 252Gbps in 44.1 hours and 99.6 hours on a single computing node with and without a GPU, respectively. MEGAHIT assembles the data as a whole, i.e., it avoids pre-processing like partitioning and normalization, which might compromise on result integrity. MEGAHIT generates 3 times larger assembly, with longer contig N50 and average contig length than the previous assembly. 55.8% of the reads were aligned to the assembly, which is 4 times higher than the previous. The source code of MEGAHIT is freely available at https://github.com/voutcn/megahit under GPLv3 license.",True,True,"Li, Dinghua and Liu, Chi-Man and Luo, Ruibang and Sadakane, Kunihiko and Lam, Tak-Wah",2015.0,,,,Bioinformatics,"MEGAHIT: An ultra-fast single-node solution for large and complex metagenomics assembly via succinct de Bruijn graph",Polyphenol rewiring of the microbiome reduces methane emissions,https://academic.oup.com/ismej/advance-article/doi/10.1093/ismejo/wraf108/8152721?searchresult=1,08 April 2025. Accepted: 23 ... et al. MEGAHIT: an ultra-fast single-node solution for large and complex metagenomics assembly via succinct de Bruijn graph. 
"NMP-PaK: Near-Memory Processing Acceleration of Scalable De Novo Genome Assembly",2505.08071v1,angizi2020pim,\cite{angizi2020pim},Pim-assembler: A processing-in-memory platform for genome assembly,,,True,False,"Angizi, Shaahin and Fahmi, Naima Ahmed and Zhang, Wei and Fan, Deliang",2020.0,,,,,Pim-assembler: A processing-in-memory platform for genome assembly,PIM-assembler: a processing-in-memory platform for genome as,https://dl.acm.org/doi/pdf/10.5555/3437539.3437694,We first develop PIM-Assembler platform that harnesses DRAM as computational memory and transforms it to a fundamental processing unit for genome assembly. "NMP-PaK: Near-Memory Processing Acceleration of Scalable De Novo Genome Assembly",2505.08071v1,wu2024abakus,\cite{wu2024abakus},Abakus: Accelerating k-mer Counting with Storage Technology,,,True,False,"Wu, Lingxi and Zhou, Minxuan and Xu, Weihong and Venkat, Ashish and Rosing, Tajana and Skadron, Kevin",2024.0,,,,ACM Transactions on Architecture and Code Optimization,Abakus: Accelerating k-mer Counting with Storage Technology,Abakus: Accelerating k -mer Counting with Storage Technology,https://ouci.dntb.gov.ua/en/works/4MwkkJv9/,"Our evaluation suggests that Abakus can achieve 8.42×, 6.91×, and 2.32× speedup over the CPU-, GPU-, and near-data processing solutions." "NMP-PaK: Near-Memory Processing Acceleration of Scalable De Novo Genome Assembly",2505.08071v1,awan2021accelerating,\cite{awan2021accelerating},Accelerating large scale de novo metagenome assembly using GPUs,,,True,False,"Awan, Muaaz Gul and Hofmeyr, Steven and Egan, Rob and Ding, Nan and Buluc, Aydin and Deslippe, Jack and Oliker, Leonid and Yelick, Katherine",2021.0,,,,,Accelerating large scale de novo metagenome assembly using GPUs,Accelerating Large Scale de novo Metagenome Assembly ...,https://par.nsf.gov/servlets/purl/10389252,"by MG Awan · 2021 · Cited by 16 — Some of the popular GPU-accelerated genome assemblers include LaSAGNA [8], which uses the overlap string- graph approach and utilizes GPUs for" "NMP-PaK: Near-Memory Processing Acceleration of Scalable De Novo Genome Assembly",2505.08071v1,goswami2018gpu,\cite{goswami2018gpu},Gpu-accelerated large-scale genome assembly,,,True,False,"Goswami, Sayan and Lee, Kisung and Shams, Shayan and Park, Seung-Jong",2018.0,,,,,Gpu-accelerated large-scale genome assembly,GPU-Accelerated Large-Scale Genome Assembly,https://www.computer.org/csdl/proceedings-article/ipdps/2018/436801a814/12OmNvTBB9c,"In this paper, we present a new GPU-accelerated genome assembler called LaSAGNA, which can assemble large-scale sequence datasets using a single GPU by building" "NMP-PaK: Near-Memory Processing Acceleration of Scalable De Novo Genome Assembly",2505.08071v1,mahmood2011gpu,\cite{mahmood2011gpu},Gpu-euler: Sequence assembly using gpgpu,,,True,False,"Mahmood, Syed Faraz and Rangwala, Huzefa",2011.0,,,,,Gpu-euler: Sequence assembly using gpgpu,[PDF] GPU-Euler: Sequence Assembly using GPGPU - GMU CS Department,https://cs.gmu.edu/media/techreports/GMU-CS-TR-2011-1.pdf,"In this paper, we investigated the effectiveness and feasibility of graph-based sequence assembly mod- els on GPUs using the CUDA programming interface. 
nVidia" "NMP-PaK: Near-Memory Processing Acceleration of Scalable De Novo Genome Assembly",2505.08071v1,jain2013gagm,\cite{jain2013gagm},GAGM: Genome assembly on GPU using mate pairs,,,True,False,"Jain, Ashutosh and Garg, Anshuj and Paul, Kolin",2013.0,,,,,GAGM: Genome assembly on GPU using mate pairs,GAGM: Genome assembly on GPU using mate pairs - IEEE Xplore,https://ieeexplore.ieee.org/document/6799107/,In this paper we present the design and development of a GPU based assembler (GAGM) for sequence assembly using Nvidia's GPUs with the CUDA programming model. "NMP-PaK: Near-Memory Processing Acceleration of Scalable De Novo Genome Assembly",2505.08071v1,swiercz2018grasshopper,\cite{swiercz2018grasshopper},GRASShopPER—An algorithm for de novo assembly based on GPU alignments,,,True,False,"Swiercz, Aleksandra and Frohmberg, Wojciech and Kierzynka, Michal and Wojciechowski, Pawel and Zurkowski, Piotr and Badura, Jan and Laskowski, Artur and Kasprzak, Marta and Blazewicz, Jacek",2018.0,,,,PloS one,GRASShopPER—An algorithm for de novo assembly based on GPU alignments,GRASShopPER-An algorithm for de novo assembly based on GPU ...,https://pubmed.ncbi.nlm.nih.gov/30114279/,It uses an efficient GPU implementation for the sequence alignment during the graph construction stage and a greedy hyper-heuristic algorithm at "NMP-PaK: Near-Memory Processing Acceleration of Scalable De Novo Genome Assembly",2505.08071v1,bwa-mem2,\cite{bwa-mem2},"Efficient Architecture-Aware Acceleration of BWA-MEM for Multicore Systems",http://arxiv.org/abs/1907.12931v1,"Innovations in Next-Generation Sequencing are enabling generation of DNA sequence data at ever faster rates and at very low cost. Large sequencing centers typically employ hundreds of such systems. Such high-throughput and low-cost generation of data underscores the need for commensurate acceleration in downstream computational analysis of the sequencing data. A fundamental step in downstream analysis is mapping of the reads to a long reference DNA sequence, such as a reference human genome. Sequence mapping is a compute-intensive step that accounts for more than 30% of the overall time of the GATK workflow. BWA-MEM is one of the most widely used tools for sequence mapping and has tens of thousands of users. In this work, we focus on accelerating BWA-MEM through an efficient architecture aware implementation, while maintaining identical output. The volume of data requires distributed computing environment, usually deploying multicore processors. Since the application can be easily parallelized for distributed memory systems, we focus on performance improvements on a single socket multicore processor. BWA-MEM run time is dominated by three kernels, collectively responsible for more than 85% of the overall compute time. We improved the performance of these kernels by 1) improving cache reuse, 2) simplifying the algorithms, 3) replacing small fragmented memory allocations with a few large contiguous ones, 4) software prefetching, and 5) SIMD utilization wherever applicable - and massive reorganization of the source code enabling these improvements. As a result, we achieved nearly 2x, 183x, and 8x speedups on the three kernels, respectively, resulting in up to 3.5x and 2.4x speedups on end-to-end compute time over the original BWA-MEM on single thread and single socket of Intel Xeon Skylake processor. To the best of our knowledge, this is the highest reported speedup over BWA-MEM.",True,True,"Vasimuddin, Md. 
and Misra, Sanchit and Li, Heng and Aluru, Srinivas",2019.0,,,10.1109/IPDPS.2019.00041,,"Efficient Architecture-Aware Acceleration of BWA-MEM for Multicore Systems",bwa-mem2/bwa-mem2: The next version of bwa-mem - GitHub,https://github.com/bwa-mem2/bwa-mem2,The tool bwa-mem2 is the next version of the bwa-mem algorithm in bwa. It ... Efficient Architecture-Aware Acceleration of BWA-MEM for Multicore Systems. "NMP-PaK: Near-Memory Processing Acceleration of Scalable De Novo Genome Assembly",2505.08071v1,minimap2,\cite{minimap2},Minimap2: pairwise alignment for nucleotide sequences,http://arxiv.org/abs/1708.01492v5,"Motivation: Recent advances in sequencing technologies promise ultra-long reads of $\sim$100 kilo bases (kb) in average, full-length mRNA or cDNA reads in high throughput and genomic contigs over 100 mega bases (Mb) in length. Existing alignment programs are unable or inefficient to process such data at scale, which presses for the development of new alignment algorithms. Results: Minimap2 is a general-purpose alignment program to map DNA or long mRNA sequences against a large reference database. It works with accurate short reads of $\ge$100bp in length, $\ge$1kb genomic reads at error rate $\sim$15%, full-length noisy Direct RNA or cDNA reads, and assembly contigs or closely related full chromosomes of hundreds of megabases in length. Minimap2 does split-read alignment, employs concave gap cost for long insertions and deletions (INDELs) and introduces new heuristics to reduce spurious alignments. It is 3-4 times faster than mainstream short-read mappers at comparable accuracy and $\ge$30 times faster at higher accuracy for both genomic and mRNA reads, surpassing most aligners specialized in one type of alignment. Availability and implementation: https://github.com/lh3/minimap2 Contact: hengli@broadinstitute.org",True,True,"Li, Heng",2018.0,05,https://doi.org/10.1093/bioinformatics/bty191,10.1093/bioinformatics/bty191,Bioinformatics,Minimap2: pairwise alignment for nucleotide sequences,Minimap2: pairwise alignment for nucleotide sequences,http://arxiv.org/pdf/1708.01492v5,"Motivation: Recent advances in sequencing technologies promise ultra-long reads of $\sim$100 kilo bases (kb) in average, full-length mRNA or cDNA reads in high throughput and genomic contigs over 100 mega bases (Mb) in length. Existing alignment programs are unable or inefficient to process such data at scale, which presses for the development of new alignment algorithms. Results: Minimap2 is a general-purpose alignment program to map DNA or long mRNA sequences against a large reference database. It works with accurate short reads of $\ge$100bp in length, $\ge$1kb genomic reads at error rate $\sim$15%, full-length noisy Direct RNA or cDNA reads, and assembly contigs or closely related full chromosomes of hundreds of megabases in length. Minimap2 does split-read alignment, employs concave gap cost for long insertions and deletions (INDELs) and introduces new heuristics to reduce spurious alignments. It is 3-4 times faster than mainstream short-read mappers at comparable accuracy and $\ge$30 times faster at higher accuracy for both genomic and mRNA reads, surpassing most aligners specialized in one type of alignment. 
Availability and implementation: https://github.com/lh3/minimap2 Contact: hengli@broadinstitute.org" "NMP-PaK: Near-Memory Processing Acceleration of Scalable De Novo Genome Assembly",2505.08071v1,mm2-fast,\cite{mm2-fast},Accelerating minimap2 for long-read sequencing applications on modern CPUs,,,True,False,"Kalikar, Saurabh and Jain, Chirag and Vasimuddin, Md and Misra, Sanchit",2022.0,,,,Nature Computational Science,Accelerating minimap2 for long-read sequencing applications on modern CPUs,Accelerating long-read analysis on modern CPUs,https://www.biorxiv.org/content/10.1101/2021.07.21.453294v2,"We present techniques to accelerate minimap2, a widely used software for mapping. We present multiple optimizations using SIMD parallelization, efficient cache" "NMP-PaK: Near-Memory Processing Acceleration of Scalable De Novo Genome Assembly",2505.08071v1,mm2-ax,\cite{mm2-ax},Accelerating Minimap2 for accurate long read alignment on GPUs,,,True,False,"Sadasivan, Harisankar and Maric, Milos and Dawson, Eric and Iyer, Vishanth and Israeli, Johnny and Narayanasamy, Satish",2023.0,,,,Journal of biotechnology and biomedicine,Accelerating Minimap2 for accurate long read alignment on GPUs,Accelerating Minimap2 for Accurate Long Read Alignment on GPUs,https://pmc.ncbi.nlm.nih.gov/articles/PMC10018915/,We present minimap2-accelerated (mm2-ax) which speeds up minimap2 (mm2) on the GPU without losing mapping accuracy and demonstrate its time and cost benefits. "NMP-PaK: Near-Memory Processing Acceleration of Scalable De Novo Genome Assembly",2505.08071v1,mm2-gb,\cite{mm2-gb},mm2-gb: GPU Accelerated Minimap2 for Long Read DNA Mapping,,,True,False,"Dong, Juechu and Liu, Xueshen and Sadasivan, Harisankar and Sitaraman, Sriranjani and Narayanasamy, Satish",2024.0,,,,bioRxiv,mm2-gb: GPU Accelerated Minimap2 for Long Read DNA Mapping,GPU Accelerated Minimap2 for Long Read DNA Mapping,https://www.biorxiv.org/content/10.1101/2024.03.23.586366v2,"We show that mm2-gb on an AMD Instinct™ MI210 GPU achieves 2.57-5.33x performance improvement on long nanopore reads (10kb-100kb), and up to" "NMP-PaK: Near-Memory Processing Acceleration of Scalable De Novo Genome Assembly",2505.08071v1,guo2019hardware,\cite{guo2019hardware},Hardware acceleration of long read pairwise overlapping in genome sequencing: A race between fpga and gpu,,,True,False,"Guo, Licheng and Lau, Jason and Ruan, Zhenyuan and Wei, Peng and Cong, Jason",2019.0,,,,,Hardware acceleration of long read pairwise overlapping in genome sequencing: A race between fpga and gpu,UCLA-VAST/minimap2-acceleration,https://github.com/UCLA-VAST/minimap2-acceleration,"Ruan, P. Wei, and J. Cong, “Hardware Acceleration of Long Read Pairwise Overlapping in Genome Sequencing: A Race Between FPGA and GPU,” in 2019 IEEE" "NMP-PaK: Near-Memory Processing Acceleration of Scalable De Novo Genome Assembly",2505.08071v1,cali2022segram,\cite{cali2022segram},"SeGraM: A Universal Hardware Accelerator for Genomic Sequence-to-Graph and Sequence-to-Sequence Mapping",http://arxiv.org/abs/2205.05883v2,"A critical step of genome sequence analysis is the mapping of sequenced DNA fragments (i.e., reads) collected from an individual to a known linear reference genome sequence (i.e., sequence-to-sequence mapping). Recent works replace the linear reference sequence with a graph-based representation of the reference genome, which captures the genetic variations and diversity across many individuals in a population. 
Mapping reads to the graph-based reference genome (i.e., sequence-to-graph mapping) results in notable quality improvements in genome analysis. Unfortunately, while sequence-to-sequence mapping is well studied with many available tools and accelerators, sequence-to-graph mapping is a more difficult computational problem, with a much smaller number of practical software tools currently available. We analyze two state-of-the-art sequence-to-graph mapping tools and reveal four key issues. We find that there is a pressing need to have a specialized, high-performance, scalable, and low-cost algorithm/hardware co-design that alleviates bottlenecks in both the seeding and alignment steps of sequence-to-graph mapping. To this end, we propose SeGraM, a universal algorithm/hardware co-designed genomic mapping accelerator that can effectively and efficiently support both sequence-to-graph mapping and sequence-to-sequence mapping, for both short and long reads. To our knowledge, SeGraM is the first algorithm/hardware co-design for accelerating sequence-to-graph mapping. SeGraM consists of two main components: (1) MinSeed, the first minimizer-based seeding accelerator; and (2) BitAlign, the first bitvector-based sequence-to-graph alignment accelerator. We demonstrate that SeGraM provides significant improvements for multiple steps of the sequence-to-graph and sequence-to-sequence mapping pipelines.",True,True,"Cali, Damla Senol and Kanellopoulos, Konstantinos and Lindegger, Jo{\""e}l and Bing{\""o}l, Z{\""u}lal and Kalsi, Gurpreet S and Zuo, Ziyi and Firtina, Can and Cavlak, Meryem Banu and Kim, Jeremie and Ghiasi, Nika Mansouri and others",2022.0,,,,,"SeGraM: A Universal Hardware Accelerator for Genomic Sequence-to-Graph and Sequence-to-Sequence Mapping",a universal hardware accelerator for genomic sequence-to- ...,https://www.researchgate.net/publication/361236225_SeGraM_a_universal_hardware_accelerator_for_genomic_sequence-to-graph_and_sequence-to-sequence_mapping,"SeGraM is the first hardware acceleration framework for sequence-to-graph mapping and alignment, where it provides an order of magnitude faster and more energy" "NMP-PaK: Near-Memory Processing Acceleration of Scalable De Novo Genome Assembly",2505.08071v1,zhang2024harp,\cite{zhang2024harp},Harp: Leveraging Quasi-Sequential Characteristics to Accelerate Sequence-to-Graph Mapping of Long Reads,,,True,False,"Zhang, Yichi and Chen, Dibei and Zeng, Gang and Zhu, Jianfeng and Li, Zhaoshi and Chen, Longlong and Wei, Shaojun and Liu, Leibo",2024.0,,,,,Harp: Leveraging Quasi-Sequential Characteristics to Accelerate Sequence-to-Graph Mapping of Long Reads,Leveraging Quasi-Sequential Characteristics to Accelerate ...,https://www.researchgate.net/publication/380150876_Harp_Leveraging_Quasi-Sequential_Characteristics_to_Accelerate_Sequence-to-Graph_Mapping_of_Long_Reads,Harp: Leveraging Quasi-Sequential Characteristics to Accelerate Sequence-to-Graph Mapping of Long Reads. April 2024. DOI:10.1145/3620666.3651331. License; CC BY "NMP-PaK: Near-Memory Processing Acceleration of Scalable De Novo Genome Assembly",2505.08071v1,ghiasi2024megis,\cite{ghiasi2024megis},"MegIS: High-Performance, Energy-Efficient, and Low-Cost Metagenomic Analysis with In-Storage Processing",http://arxiv.org/abs/2406.19113v1,"Metagenomics has led to significant advances in many fields. Metagenomic analysis commonly involves the key tasks of determining the species present in a sample and their relative abundances. These tasks require searching large metagenomic databases. 
Metagenomic analysis suffers from significant data movement overhead due to moving large amounts of low-reuse data from the storage system. In-storage processing can be a fundamental solution for reducing this overhead. However, designing an in-storage processing system for metagenomics is challenging because existing approaches to metagenomic analysis cannot be directly implemented in storage effectively due to the hardware limitations of modern SSDs. We propose MegIS, the first in-storage processing system designed to significantly reduce the data movement overhead of the end-to-end metagenomic analysis pipeline. MegIS is enabled by our lightweight design that effectively leverages and orchestrates processing inside and outside the storage system. We address in-storage processing challenges for metagenomics via specialized and efficient 1) task partitioning, 2) data/computation flow coordination, 3) storage technology-aware algorithmic optimizations, 4) data mapping, and 5) lightweight in-storage accelerators. MegIS's design is flexible, capable of supporting different types of metagenomic input datasets, and can be integrated into various metagenomic analysis pipelines. Our evaluation shows that MegIS outperforms the state-of-the-art performance- and accuracy-optimized software metagenomic tools by 2.7$\times$-37.2$\times$ and 6.9$\times$-100.2$\times$, respectively, while matching the accuracy of the accuracy-optimized tool. MegIS achieves 1.5$\times$-5.1$\times$ speedup compared to the state-of-the-art metagenomic hardware-accelerated (using processing-in-memory) tool, while achieving significantly higher accuracy.",True,True,"Ghiasi, Nika Mansouri and Sadrosadati, Mohammad and Mustafa, Harun and Gollwitzer, Arvid and Firtina, Can and Eudine, Julien and Mao, Haiyu and Lindegger, Jo{\""e}l and Cavlak, Meryem Banu and Alser, Mohammed and others",2024.0,,,,,"MegIS: High-Performance, Energy-Efficient, and Low-Cost Metagenomic Analysis with In-Storage Processing","[PDF] MegIS: High-Performance, Energy-Efficient, and Low-Cost ... - arXiv",https://arxiv.org/pdf/2406.19113,"Through our detailed analysis of the end-to-end metagenomic analysis pipeline and careful hardware/software co-design, we address in-storage" "NMP-PaK: Near-Memory Processing Acceleration of Scalable De Novo Genome Assembly",2505.08071v1,gu2023gendp,\cite{gu2023gendp},GenDP: A Framework of Dynamic Programming Acceleration for Genome Sequencing Analysis,,,True,False,"Gu, Yufeng and Subramaniyan, Arun and Dunn, Tim and Khadem, Alireza and Chen, Kuan-Yu and Paul, Somnath and Vasimuddin, Md and Misra, Sanchit and Blaauw, David and Narayanasamy, Satish and others",2023.0,,,,,GenDP: A Framework of Dynamic Programming Acceleration for Genome Sequencing Analysis,GenDP: A Framework of Dynamic Programming Acceleration for ...,https://dl.acm.org/doi/10.1145/3712168,"This paper presents GenDP, a framework of dynamic programming acceleration including DPAx, a DP accelerator, and DPMap, a graph-partitioning algorithm." 
"NMP-PaK: Near-Memory Processing Acceleration of Scalable De Novo Genome Assembly",2505.08071v1,pavon2024quetzal,\cite{pavon2024quetzal},QUETZAL: Vector Acceleration Framework for Modern Genome Sequence Analysis Algorithms,,,True,False,"Pavon, Julian and Valdivieso, Ivan Vargas and Rojas, Carlos and Hernandez, Cesar and Aslan, Mehmet and Figueras, Roger and Yuan, Yichao and Lindegger, Jo{\""e}l and Alser, Mohammed and Moll, Francesc and others",2024.0,,,,,QUETZAL: Vector Acceleration Framework for Modern Genome Sequence Analysis Algorithms,QUETZAL: Vector Acceleration Framework for Modern Genome ...,https://ieeexplore.ieee.org/document/10609714/,"QUETZAL significantly accelerates a vectorized CPU baseline on modern genome sequence analysis algorithms by 5.7×, while incurring a small area overhead of 1.4%" Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,bts,\cite{bts},BTS: An Accelerator for Bootstrappable Fully Homomorphic Encryption,http://arxiv.org/abs/2112.15479v2,"Homomorphic encryption (HE) enables the secure offloading of computations to the cloud by providing computation on encrypted data (ciphertexts). HE is based on noisy encryption schemes in which noise accumulates as more computations are applied to the data. The limited number of operations applicable to the data prevents practical applications from exploiting HE. Bootstrapping enables an unlimited number of operations or fully HE (FHE) by refreshing the ciphertext. Unfortunately, bootstrapping requires a significant amount of additional computation and memory bandwidth as well. Prior works have proposed hardware accelerators for computation primitives of FHE. However, to the best of our knowledge, this is the first to propose a hardware FHE accelerator that supports bootstrapping as a first-class citizen. In particular, we propose BTS - Bootstrappable, Technologydriven, Secure accelerator architecture for FHE. We identify the challenges of supporting bootstrapping in the accelerator and analyze the off-chip memory bandwidth and computation required. In particular, given the limitations of modern memory technology, we identify the HE parameter sets that are efficient for FHE acceleration. Based on the insights gained from our analysis, we propose BTS, which effectively exploits the parallelism innate in HE operations by arranging a massive number of processing elements in a grid. We present the design and microarchitecture of BTS, including a network-on-chip design that exploits a deterministic communication pattern. BTS shows 5,556x and 1,306x improved execution time on ResNet-20 and logistic regression over a CPU, with a chip area of 373.6mm^2 and up to 163.2W of power.",True,True,"Kim, Sangpyo and Kim, Jongmin and Kim, Michael Jaemin and Jung, Wonkyung and Kim, John and Rhu, Minsoo and Ahn, Jung Ho",2022.0,,https://doi.org/10.1145/3470496.3527415,10.1145/3470496.3527415,,BTS: An Accelerator for Bootstrappable Fully Homomorphic Encryption,BTS: An Accelerator for Bootstrappable Fully Homomorphic Encryption,http://arxiv.org/pdf/2112.15479v2,"Homomorphic encryption (HE) enables the secure offloading of computations to the cloud by providing computation on encrypted data (ciphertexts). HE is based on noisy encryption schemes in which noise accumulates as more computations are applied to the data. The limited number of operations applicable to the data prevents practical applications from exploiting HE. 
Bootstrapping enables an unlimited number of operations or fully HE (FHE) by refreshing the ciphertext. Unfortunately, bootstrapping requires a significant amount of additional computation and memory bandwidth as well. Prior works have proposed hardware accelerators for computation primitives of FHE. However, to the best of our knowledge, this is the first to propose a hardware FHE accelerator that supports bootstrapping as a first-class citizen. In particular, we propose BTS - Bootstrappable, Technologydriven, Secure accelerator architecture for FHE. We identify the challenges of supporting bootstrapping in the accelerator and analyze the off-chip memory bandwidth and computation required. In particular, given the limitations of modern memory technology, we identify the HE parameter sets that are efficient for FHE acceleration. Based on the insights gained from our analysis, we propose BTS, which effectively exploits the parallelism innate in HE operations by arranging a massive number of processing elements in a grid. We present the design and microarchitecture of BTS, including a network-on-chip design that exploits a deterministic communication pattern. BTS shows 5,556x and 1,306x improved execution time on ResNet-20 and logistic regression over a CPU, with a chip area of 373.6mm^2 and up to 163.2W of power." Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,ark,\cite{ark},"ARK: Fully Homomorphic Encryption Accelerator with Runtime Data Generation and Inter-Operation Key Reuse",http://arxiv.org/abs/2205.00922v3,"Homomorphic Encryption (HE) is one of the most promising post-quantum cryptographic schemes that enable privacy-preserving computation on servers. However, noise accumulates as we perform operations on HE-encrypted data, restricting the number of possible operations. Fully HE (FHE) removes this restriction by introducing the bootstrapping operation, which refreshes the data; however, FHE schemes are highly memory-bound. Bootstrapping, in particular, requires loading GBs of evaluation keys and plaintexts from off-chip memory, which makes FHE acceleration fundamentally bottlenecked by the off-chip memory bandwidth. In this paper, we propose ARK, an Accelerator for FHE with Runtime data generation and inter-operation Key reuse. ARK enables practical FHE workloads with a novel algorithm-architecture co-design to accelerate bootstrapping. We first eliminate the off-chip memory bandwidth bottleneck through runtime data generation and inter-operation key reuse. This approach enables ARK to fully exploit on-chip memory by substantially reducing the size of the working set. On top of such algorithmic enhancements, we build ARK microarchitecture that minimizes on-chip data movement through an efficient, alternating data distribution policy based on the data access patterns and a streamlined dataflow organization of the tailored functional units -- including base conversion, number-theoretic transform, and automorphism units. 
Overall, our co-design effectively handles the heavy computation and data movement overheads of FHE, drastically reducing the cost of HE operations, including bootstrapping.",True,True,"Kim, Jongmin and Lee, Gwangho and Kim, Sangpyo and Sohn, Gina and Rhu, Minsoo and Kim, John and Ahn, Jung Ho",2022.0,,,10.1109/MICRO56248.2022.00086,,"ARK: Fully Homomorphic Encryption Accelerator with Runtime Data Generation and Inter-Operation Key Reuse",[PDF] ARK: Fully Homomorphic Encryption Accelerator with Runtime Data ...,https://scale.snu.ac.kr/papers/2022-10-Conference-MICRO-ARK.pdf, Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,sharp,\cite{sharp},SHARP: A Short-Word Hierarchical Accelerator for Robust and Practical Fully Homomorphic Encryption,,,True,False,"Kim, Jongmin and Kim, Sangpyo and Choi, Jaewan and Park, Jaiyoung and Kim, Donghwan and Ahn, Jung Ho",2023.0,,https://doi.org/10.1145/3579371.3589053,10.1145/3579371.3589053,,SHARP: A Short-Word Hierarchical Accelerator for Robust and Practical Fully Homomorphic Encryption,SHARP: A Short-Word Hierarchical Accelerator for Robust and...,https://openreview.net/forum?id=TnqJcW7psF,"We propose SHARP, a robust and practical accelerator for FHE. We analyze the implications of various hardware design choices on the" Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,f1,\cite{f1},"F1: A Fast and Programmable Accelerator for Fully Homomorphic Encryption (Extended Version)",http://arxiv.org/abs/2109.05371v2,"Fully Homomorphic Encryption (FHE) allows computing on encrypted data, enabling secure offloading of computation to untrusted servers. Though it provides ideal security, FHE is expensive when executed in software, 4 to 5 orders of magnitude slower than computing on unencrypted data. These overheads are a major barrier to FHE's widespread adoption. We present F1, the first FHE accelerator that is programmable, i.e., capable of executing full FHE programs. F1 builds on an in-depth architectural analysis of the characteristics of FHE computations that reveals acceleration opportunities. F1 is a wide-vector processor with novel functional units deeply specialized to FHE primitives, such as modular arithmetic, number-theoretic transforms, and structured permutations. This organization provides so much compute throughput that data movement becomes the bottleneck. Thus, F1 is primarily designed to minimize data movement. The F1 hardware provides an explicitly managed memory hierarchy and mechanisms to decouple data movement from execution. A novel compiler leverages these mechanisms to maximize reuse and schedule off-chip and on-chip data movement. We evaluate F1 using cycle-accurate simulations and RTL synthesis. F1 is the first system to accelerate complete FHE programs and outperforms state-of-the-art software implementations by gmean 5400x and by up to 17000x. 
These speedups counter most of FHE's overheads and enable new applications, like real-time private deep learning in the cloud.",True,True,"Samardzic, Nikola and Feldmann, Axel and Krastev, Aleksandar and Devadas, Srinivas and Dreslinski, Ronald and Peikert, Christopher and Sanchez, Daniel",2021.0,,https://doi.org/10.1145/3466752.3480070,10.1145/3466752.3480070,,"F1: A Fast and Programmable Accelerator for Fully Homomorphic Encryption (Extended Version)",F1: A Fast and Programmable Accelerator for Fully Homomorphic ...,https://arxiv.org/abs/2109.05371,F1 is the first system to accelerate complete FHE programs and outperforms state-of-the-art software implementations by gmean 5400x and by up to 17000x. Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,clake,\cite{clake},CraterLake: A Hardware Accelerator for Efficient Unbounded Computation on Encrypted Data,,,True,False,"Samardzic, Nikola and Feldmann, Axel and Krastev, Aleksandar and Manohar, Nathan and Genise, Nicholas and Devadas, Srinivas and Eldefrawy, Karim and Peikert, Chris and Sanchez, Daniel",2022.0,,https://doi.org/10.1145/3470496.3527393,10.1145/3470496.3527393,,CraterLake: A Hardware Accelerator for Efficient Unbounded Computation on Encrypted Data,CraterLake: a hardware accelerator for efficien,https://dl.acm.org/doi/pdf/10.1145/3470496.3527393,"by N Samardzic · 2022 · Cited by 223 — Fully homomorphic encryption (FHE) is a special type of encryp- tion scheme that enables computing on encrypted data directly, with- out decrypting it. FHE" Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,rpu,\cite{rpu},RPU: The Ring Processing Unit,http://arxiv.org/abs/2303.17118v3,"Ring-Learning-with-Errors (RLWE) has emerged as the foundation of many important techniques for improving security and privacy, including homomorphic encryption and post-quantum cryptography. While promising, these techniques have received limited use due to their extreme overheads of running on general-purpose machines. In this paper, we present a novel vector Instruction Set Architecture (ISA) and microarchitecture for accelerating the ring-based computations of RLWE. The ISA, named B512, is developed to meet the needs of ring processing workloads while balancing high-performance and general-purpose programming support. Having an ISA rather than fixed hardware facilitates continued software improvement post-fabrication and the ability to support the evolving workloads. We then propose the ring processing unit (RPU), a high-performance, modular implementation of B512. The RPU has native large word modular arithmetic support, capabilities for very wide parallel processing, and a large capacity high-bandwidth scratchpad to meet the needs of ring processing. We address the challenges of programming the RPU using a newly developed SPIRAL backend. A configurable simulator is built to characterize design tradeoffs and quantify performance. The best performing design was implemented in RTL and used to validate simulator performance. 
In addition to our characterization, we show that a RPU using 20.5mm2 of GF 12nm can provide a speedup of 1485x over a CPU running a 64k, 128-bit NTT, a core RLWE workload",True,True,"Soni, Deepraj and Neda, Negar and Zhang, Naifeng and Reynwar, Benedict and Gamil, Homer and Heyman, Benjamin and Nabeel, Mohammed and Badawi, Ahmad Al and Polyakov, Yuriy and Canida, Kellie and Pedram, Massoud and Maniatakos, Michail and Cousins, David Bruce and Franchetti, Franz and French, Matthew and Schmidt, Andrew and Reagen, Brandon",2023.0,,,10.1109/ISPASS57527.2023.00034,,RPU: The Ring Processing Unit,RPU: The Ring Processing Unit,http://arxiv.org/pdf/2303.17118v3,"Ring-Learning-with-Errors (RLWE) has emerged as the foundation of many important techniques for improving security and privacy, including homomorphic encryption and post-quantum cryptography. While promising, these techniques have received limited use due to their extreme overheads of running on general-purpose machines. In this paper, we present a novel vector Instruction Set Architecture (ISA) and microarchitecture for accelerating the ring-based computations of RLWE. The ISA, named B512, is developed to meet the needs of ring processing workloads while balancing high-performance and general-purpose programming support. Having an ISA rather than fixed hardware facilitates continued software improvement post-fabrication and the ability to support the evolving workloads. We then propose the ring processing unit (RPU), a high-performance, modular implementation of B512. The RPU has native large word modular arithmetic support, capabilities for very wide parallel processing, and a large capacity high-bandwidth scratchpad to meet the needs of ring processing. We address the challenges of programming the RPU using a newly developed SPIRAL backend. A configurable simulator is built to characterize design tradeoffs and quantify performance. The best performing design was implemented in RTL and used to validate simulator performance. In addition to our characterization, we show that a RPU using 20.5mm2 of GF 12nm can provide a speedup of 1485x over a CPU running a 64k, 128-bit NTT, a core RLWE workload" Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,haac,\cite{haac},HAAC: A Hardware-Software Co-Design to Accelerate Garbled Circuits,http://arxiv.org/abs/2211.13324v3,"Privacy and security have rapidly emerged as priorities in system design. One powerful solution for providing both is privacy-preserving computation, where functions are computed directly on encrypted data and control can be provided over how data is used. Garbled circuits (GCs) are a PPC technology that provide both confidential computing and control over how data is used. The challenge is that they incur significant performance overheads compared to plaintext. This paper proposes a novel garbled circuits accelerator and compiler, named HAAC, to mitigate performance overheads and make privacy-preserving computation more practical. HAAC is a hardware-software co-design. GCs are exemplars of co-design as programs are completely known at compile time, i.e., all dependence, memory accesses, and control flow are fixed. The design philosophy of HAAC is to keep hardware simple and efficient, maximizing area devoted to our proposed custom execution units and other circuits essential for high performance (e.g., on-chip storage). 
The compiler can leverage its program understanding to realize hardware's performance potential by generating effective instruction schedules, data layouts, and orchestrating off-chip events. In taking this approach we can achieve ASIC performance/efficiency without sacrificing generality. Insights of our approach include how co-design enables expressing arbitrary GCs programs as streams, which simplifies hardware and enables complete memory-compute decoupling, and the development of a scratchpad that captures data reuse by tracking program execution, eliminating the need for costly hardware managed caches and tagging logic. We evaluate HAAC with VIP-Bench and achieve an average speedup of 589$\times$ with DDR4 (2,627$\times$ with HBM2) in 4.3mm$^2$ of area.",True,True,"Mo, Jianqiao and Gopinath, Jayanth and Reagen, Brandon",2023.0,,https://doi.org/10.1145/3579371.3589045,10.1145/3579371.3589045,,HAAC: A Hardware-Software Co-Design to Accelerate Garbled Circuits,[PDF] Hardware-Software Co-Design to Accelerate Garble Circuits,https://license.tov.med.nyu.edu/product/hardware-software-co-design-to-accelerate-garble-circuits/print,"Novel in its approach, HAAC is a garbled circuit accelerator and compiler that enhances privacy- preserving computation, offering a practical solution to data" Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,karthik,\cite{karthik},Characterizing and Optimizing End-to-End Systems for Private Inference,http://arxiv.org/abs/2207.07177v2,"In two-party machine learning prediction services, the client's goal is to query a remote server's trained machine learning model to perform neural network inference in some application domain. However, sensitive information can be obtained during this process by either the client or the server, leading to potential collection, unauthorized secondary use, and inappropriate access to personal information. These security concerns have given rise to Private Inference (PI), in which both the client's personal data and the server's trained model are kept confidential. State-of-the-art PI protocols consist of a pre-processing or offline phase and an online phase that combine several cryptographic primitives: Homomorphic Encryption (HE), Secret Sharing (SS), Garbled Circuits (GC), and Oblivious Transfer (OT). Despite the need and recent performance improvements, PI remains largely arcane today and is too slow for practical use. This paper addresses PI's shortcomings with a detailed characterization of a standard high-performance protocol to build foundational knowledge and intuition in the systems community. Our characterization pinpoints all sources of inefficiency -- compute, communication, and storage. In contrast to prior work, we consider inference request arrival rates rather than studying individual inferences in isolation and we find that the pre-processing phase cannot be ignored and is often incurred online as there is insufficient downtime to hide pre-compute latency. Finally, we leverage insights from our characterization and propose three optimizations to address the storage (Client-Garbler), computation (layer-parallel HE), and communication (wireless slot allocation) overheads. 
Compared to the state-of-the-art PI protocol, these optimizations provide a total PI speedup of 1.8$\times$ with the ability to sustain inference requests up to a 2.24$\times$ greater rate.",True,True,"Garimella, Karthik and Ghodsi, Zahra and Jha, Nandan Kumar and Garg, Siddharth and Reagen, Brandon",2023.0,,https://doi.org/10.1145/3582016.3582065,10.1145/3582016.3582065,,Characterizing and Optimizing End-to-End Systems for Private Inference,Characterizing and Optimizing End-to- ...,https://arxiv.org/pdf/2207.07177,"by K Garimella · 2022 · Cited by 32 — This section provides a primer on the cryptographic primitives used to achieve private inference: homomorphic encryption (HE), secret sharing (SS), garbled" Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,ciflow,\cite{ciflow},"CiFlow: Dataflow Analysis and Optimization of Key Switching for Homomorphic Encryption",http://arxiv.org/abs/2311.01598v4,"Homomorphic encryption (HE) is a privacy-preserving computation technique that enables computation on encrypted data. Today, the potential of HE remains largely unrealized as it is impractically slow, preventing it from being used in real applications. A major computational bottleneck in HE is the key-switching operation, accounting for approximately 70% of the overall HE execution time and involving a large amount of data for inputs, intermediates, and keys. Prior research has focused on hardware accelerators to improve HE performance, typically featuring large on-chip SRAMs and high off-chip bandwidth to deal with large scale data. In this paper, we present a novel approach to improve key-switching performance by rigorously analyzing its dataflow. Our primary goal is to optimize data reuse with limited on-chip memory to minimize off-chip data movement. We introduce three distinct dataflows: Max-Parallel (MP), Digit-Centric (DC), and Output-Centric (OC), each with unique scheduling approaches for key-switching computations. Through our analysis, we show how our proposed Output-Centric technique can effectively reuse data by significantly lowering the intermediate key-switching working set and alleviating the need for massive off-chip bandwidth. We thoroughly evaluate the three dataflows using the RPU, a recently published vector processor tailored for ring processing algorithms, which includes HE. This evaluation considers sweeps of bandwidth and computational throughput, and whether keys are buffered on-chip or streamed. With OC, we demonstrate up to 4.16x speedup over the MP dataflow and show how OC can save 12.25x on-chip SRAM by streaming keys for minimal performance penalty.",True,True,"Neda, Negar and Ebel, Austin and Reynwar, Benedict and Reagen, Brandon",2024.0,,,10.1109/ISPASS61541.2024.00016,,"CiFlow: Dataflow Analysis and Optimization of Key Switching for Homomorphic Encryption",BUAA-CI-LAB/Literatures-on-Homomorphic-Encryption - GitHub,https://github.com/BUAA-CI-LAB/Literatures-on-Homomorphic-Encryption,"[ArXiv 2023] [Key Switching] CiFlow: Dataflow Analysis and Optimization of Key Switching for Homomorphic Encryption. Neda N, Ebel A, Reynwar B, et al. [Paper]." 
Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,priorMSM,\cite{priorMSM},PriorMSM: An Efficient Acceleration Architecture for Multi-Scalar Multiplication,,,True,False,"Liu, Changxu and Zhou, Hao and Dai, Patrick and Shang, Li and Yang, Fan",2024.0,,,,ACM Transactions on Design Automation of Electronic Systems,PriorMSM: An Efficient Acceleration Architecture for Multi-Scalar Multiplication,PriorMSM: An Efficient Acceleration Architecture for Multi- ...,https://dl.acm.org/doi/full/10.1145/3678006,by C Liu · 2024 · Cited by 4 — Multi-Scalar Multiplication (MSM) is a computationally intensive task that operates on elliptic curves based on GF(P). Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,distMSM,\cite{distMSM},Accelerating Multi-Scalar Multiplication for Efficient Zero Knowledge Proofs with Multi-GPU Systems,,,True,False,"Ji, Zhuoran and Zhang, Zhiyuan and Xu, Jiming and Ju, Lei",2024.0,,https://doi.org/10.1145/3620666.3651364,10.1145/3620666.3651364,,Accelerating Multi-Scalar Multiplication for Efficient Zero Knowledge Proofs with Multi-GPU Systems,Accelerating Multi-Scalar Multiplication for Efficient Zero Knowledge ...,https://dl.acm.org/doi/10.1145/3620666.3651364,"* Ji Z Zhao J Gao P Yin X Ju L Eeckhout L Smaragdakis G Liang K Sampson A Kim M Rossbach C(2025)Accelerating Number Theoretic Transform with Multi-GPU Systems for Efficient Zero Knowledge Proof Proceedings of the 30th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 1 10.1145/3669940.3707241(1-14)Online publication date: 30-Mar-2025https://dl.acm.org/doi/10.1145/3669940.3707241 " Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,cuZK,\cite{cuZK},cuZK: Accelerating Zero-Knowledge Proof with A Faster Parallel Multi-Scalar Multiplication Algorithm on GPUs,,,True,False,Tao Lu and Chengkun Wei and Ruijing Yu and Chaochao Chen and Wenjing Fang and Lei Wang and Zeke Wang and Wenzhi Chen,2022.0,,https://eprint.iacr.org/2022/1321,,,cuZK: Accelerating Zero-Knowledge Proof with A Faster Parallel Multi-Scalar Multiplication Algorithm on GPUs,cuZK: Accelerating Zero-Knowledge Proof with A Faster Parallel ...,https://hgpu.org/?p=27333,"Therefore, this paper presents cuZK, an efficient GPU implementation of ZKP with the following three optimizations to achieve higher performance" Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,gypso,\cite{gypso},Gypsophila: A Scalable and Bandwidth-Optimized Multi-Scalar Multiplication Architecture,,,True,False,"Liu, Changxu and Zhou, Hao and Yang, Lan and Xu, Jiamin and Dai, Patrick and Yang, Fan",2024.0,,https://doi.org/10.1145/3649329.3658259,10.1145/3649329.3658259,,Gypsophila: A Scalable and Bandwidth-Optimized Multi-Scalar Multiplication Architecture,Presenter - DAC 2024,https://61dac.conference-program.com/presenter/?uid=8865008340133456017,"Gypsophila: A Scalable and Bandwidth-Optimized Multi-Scalar Multiplication Architecture ... 
Hardware Security: Primitives, Architecture, Design & Test." Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,reZK,\cite{reZK},ReZK: A Highly Reconfigurable Accelerator for Zero-Knowledge Proof,,,True,False,"Zhou, Hao and Liu, Changxu and Yang, Lan and Shang, Li and Yang, Fan",2024.0,,,,IEEE Transactions on Circuits and Systems I: Regular Papers,ReZK: A Highly Reconfigurable Accelerator for Zero-Knowledge Proof,ReZK: A Highly Reconfigurable Accelerator for Zero- ...,https://ieeexplore.ieee.org/document/10714365/,"by H Zhou · 2024 · Cited by 2 — In this paper, we propose a highly reconfigurable accelerator ReZK to accelerate ZKP proof generation phase, focusing on NTT/INTT and MSM." Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,myotosis,\cite{myotosis},Myosotis: An Efficiently Pipelined and Parameterized Multi-Scalar Multiplication Architecture via Data Sharing,,,True,False,"Liu, Changxu and Zhou, Hao and Yang, Lan and Wu, Zheng and Dai, Patrick and Li, Yinlong and Wu, Shiyong and Yang, Fan",2024.0,,,10.1109/TCAD.2024.3524364,IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems,Myosotis: An Efficiently Pipelined and Parameterized Multi-Scalar Multiplication Architecture via Data Sharing,"Fan YANG | Fudan University, Shanghai | Research profile",https://www.researchgate.net/profile/Fan-Yang-398,Myosotis: An Efficiently Pipelined and Parameterized Multi-Scalar Multiplication Architecture via Data Sharing. Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,MSMAC,\cite{MSMAC},MSMAC: Accelerating Multi-Scalar Multiplication for Zero-Knowledge Proof,,,True,False,"Qiu, Pengcheng and Wu, Guiming and Chu, Tingqiang and Wei, Changzheng and Luo, Runzhou and Yan, Ying and Wang, Wei and Zhang, Hui",2024.0,,https://doi.org/10.1145/3649329.3655672,10.1145/3649329.3655672,,MSMAC: Accelerating Multi-Scalar Multiplication for Zero-Knowledge Proof,Accelerating Multi-Scalar Multiplication for Zero-Knowledge Proof,https://eprint.iacr.org/2024/1246,"In this paper, we propose MSMAC, an FPGA accelerator for large-scale MSM. MSMAC adopts a specially designed Instruction Set Architecture (ISA) for MSM." Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,intel_zkp,\cite{intel_zkp},if-ZKP: Intel FPGA-Based Acceleration of Zero Knowledge Proofs,,,True,False,Shahzad Ahmad Butt and Benjamin Reynolds and Veeraraghavan Ramamurthy and Xiao Xiao and Pohrong Chu and Setareh Sharifian and Sergey Gribok and Bogdan Pasca,2024.0,,https://arxiv.org/abs/2412.12481,,,if-ZKP: Intel FPGA-Based Acceleration of Zero Knowledge Proofs,if-ZKP: Intel FPGA-Based Acceleration of Zero Knowledge ...,https://www.computer.org/csdl/proceedings-article/fccm/2024/724300a212/1ZWbp6XcENO,by SA Butt · 2024 · Cited by 1 — This paper presents a novel scalable FPGA architecture for accelerating the zk-SNARK prover's compute-intensive multi-scalar multiplication (MSM) operation. 
Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,elastic_msm,\cite{elastic_msm},"Elastic {MSM}: A Fast, Elastic and Modular Preprocessing Technique for Multi-Scalar Multiplication Algorithm on {GPUs}",,,True,False,Xudong Zhu and Haoqi He and Zhengbang Yang and Yi Deng and Lutan Zhao and Rui Hou,2024.0,,https://eprint.iacr.org/2024/057,,,"Elastic {MSM}: A Fast, Elastic and Modular Preprocessing Technique for Multi-Scalar Multiplication Algorithm on {GPUs}",Elastic MSM a Fast Elastic and Modular Preprocessing Technique ...,http://hourui-arch.net/publication/elastic-msm-a-fast-elastic-and-modular-preprocessing-technique-for-multi-scalar-multiplication-algorithm-on-gpus/,"Elastic MSM a Fast Elastic and Modular Preprocessing Technique for Multi Scalar Multiplication Algorithm on GPUs. Xudong Zhu , Haoqi He" Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,tches_ntt_msm,\cite{tches_ntt_msm},A High-performance NTT/MSM Accelerator for Zero-knowledge Proof Using Load-balanced Fully-pipelined Montgomery Multiplier,,,True,False,"Chen, Xiangren and Yang, Bohan and Zhu, Wenping and Wang, Hanning and Tao, Qichao and Yin, Shuying and Zhu, Min and Wei, Shaojun and Liu, Leibo",2024.0,Dec.,https://tches.iacr.org/index.php/TCHES/article/view/11930,10.46586/tches.v2025.i1.275-313,IACR Transactions on Cryptographic Hardware and Embedded Systems,A High-performance NTT/MSM Accelerator for Zero-knowledge Proof Using Load-balanced Fully-pipelined Montgomery Multiplier,View of A High-performance NTT/MSM Accelerator for Zero ...,https://tches.iacr.org/index.php/TCHES/article/view/11930/11789,by X Chen · 2025 · Cited by 1 — A High-performance NTT/MSM Accelerator for Zero-knowledge Proof Using Load-balanced Fully Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,sam,\cite{sam},SAM: A Scalable Accelerator for Number Theoretic Transform Using Multi-Dimensional Decomposition,,,True,False,"Wang, Cheng and Gao, Mingyu",2023.0,,,10.1109/ICCAD57390.2023.10323744,,SAM: A Scalable Accelerator for Number Theoretic Transform Using Multi-Dimensional Decomposition,Accelerating Number Theoretic Transformations for Bootstrappable ...,https://www.researchgate.net/publication/347068332_Accelerating_Number_Theoretic_Transformations_for_Bootstrappable_Homomorphic_Encryption_on_GPUs,SAM: A Scalable Accelerator for Number Theoretic Transform Using Multi-Dimensional Decomposition. Conference Paper. Oct 2023. Cheng Wang · Mingyu Gao. Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,legozk,\cite{legozk},LegoZK: a Dynamically Reconfigurable Accelerator for Zero Knowledge Proof,,,True,False,"Yang, Zhengbang and Zhao, Lutan and Li, Peinan and Liu, Han and Li, Kai and Zhao, Boyan and Meng, Dan and Hou, Rui",,,,,,LegoZK: a Dynamically Reconfigurable Accelerator for Zero Knowledge Proof,LegoZK: A Dynamically Reconfigurable Accelerator for ...,https://ieeexplore.ieee.org/document/10946728/,"by Z Yang · 2025 · Cited by 1 — We propose LegoZK, a dynamically reconfigurable hardware accelerator for ZKP. LegoZK employs finite field arithmetic units (FAUs) as its fundamental components." 
Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,unizk,\cite{unizk},UniZK: Accelerating Zero-Knowledge Proof with Unified Hardware and Flexible Kernel Mapping,,,True,False,"Wang, Cheng and Gao, Mingyu",2025.0,,https://doi.org/10.1145/3669940.3707228,10.1145/3669940.3707228,,UniZK: Accelerating Zero-Knowledge Proof with Unified Hardware and Flexible Kernel Mapping,UniZK: Accelerating Zero-Knowledge Proof with Unified Hardware ...,https://dl.acm.org/doi/10.1145/3669940.3707228,"UniZK: Accelerating Zero-Knowledge Proof with Unified Hardware and Flexible Kernel Mapping | Proceedings of the 30th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 1" Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,graz,\cite{graz},Chiplet-Based Techniques for Scalable and Memory-Aware Multi-Scalar Multiplication,,,True,False,Florian Hirner and Florian Krieger and Sujoy Sinha Roy,2025.0,,https://eprint.iacr.org/2025/252,,,Chiplet-Based Techniques for Scalable and Memory-Aware Multi-Scalar Multiplication,Chiplet-Based Techniques for Scalable and Memory-Aware Multi ...,https://eprint.iacr.org/2025/252,"This paper presents a high-performance architecture for accelerating Multi-Scalar Multiplication (MSM) on ASIC platforms, targeting cryptographic applications" Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,batchzk,\cite{batchzk},{BatchZK}: A Fully Pipelined {GPU}-Accelerated System for Batch Generation of Zero-Knowledge Proofs,,,True,False,Tao Lu and Yuxun Chen and Zonghui Wang and Xiaohang Wang and Wenzhi Chen and Jiaheng Zhang,2024.0,,https://eprint.iacr.org/2024/1862,,,{BatchZK}: A Fully Pipelined {GPU}-Accelerated System for Batch Generation of Zero-Knowledge Proofs,Tao Lu - Google Scholar,https://scholar.google.com/citations?user=AmWNnFwAAAAJ&hl=en,"BatchZK: A Fully Pipelined GPU-Accelerated System for Batch Generation of Zero-Knowledge Proofs. T Lu, Y Chen, Z Wang, X Wang, W Chen, J Zhang. the ACM" Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,nocap,\cite{nocap},Accelerating Zero-Knowledge Proofs Through Hardware-Algorithm Co-Design,,,True,False,Nikola Samardzic and Simon Langowski and Srinivas Devadas and Daniel Sanchez,2024.0,,,,,Accelerating Zero-Knowledge Proofs Through Hardware-Algorithm Co-Design,Accelerating Zero-Knowledge Proofs Through Hardware-Algorithm ...,https://ieeexplore.ieee.org/document/10764644/,"We present a novel accelerator, NoCap, that leverages hardware-algorithm co-design to achieve transformative speedups." Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,gottahashemall,\cite{gottahashemall},"Gotta Hash 'Em All! Speeding Up Hash Functions for Zero-Knowledge Proof Applications",http://arxiv.org/abs/2501.18780v1,"Collision-resistant cryptographic hash functions (CRHs) are crucial for security in modern systems but are optimized for standard CPUs. 
While heavily used in zero-knowledge proof (ZKP) applications, traditional CRHs are inefficient in the ZK domain. ZK-friendly hashes have been developed but struggle on consumer hardware due to a lack of specialized ZK-specific hardware. To address this, we present HashEmAll, a novel collection of FPGA-based realizations of three ZK-friendly hash functions: Griffin, Rescue-Prime, and Reinforced Concrete. Each hash offers different optimization focuses, allowing users to choose based on the constraints of their applications. Through our ZK-optimized arithmetic functions on reconfigurable hardware, HashEmAll outperforms CPU implementations by up to $23\times$ with lower power consumption and compatibility with accessible FPGAs.",True,True,Nojan Sheybani and Tengkai Gong and Anees Ahmed and Nges Brian Njungle and Michel Kinsy and Farinaz Koushanfar,2025.0,,https://arxiv.org/abs/2501.18780,,,"Gotta Hash 'Em All! Speeding Up Hash Functions for Zero-Knowledge Proof Applications",[2501.18780] Gotta Hash 'Em All! Speeding Up Hash Functions for ...,https://arxiv.org/abs/2501.18780,"To address this, we present HashEmAll, a novel collection of FPGA-based realizations of three ZK-friendly hash functions: Griffin, Rescue-Prime," Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,amaze,\cite{amaze},"AMAZE: Accelerated MiMC Hardware Architecture for Zero-Knowledge Applications on the Edge",http://arxiv.org/abs/2411.06350v1,"Collision-resistant, cryptographic hash (CRH) functions have long been an integral part of providing security and privacy in modern systems. Certain constructions of zero-knowledge proof (ZKP) protocols aim to utilize CRH functions to perform cryptographic hashing. Standard CRH functions, such as SHA2, are inefficient when employed in the ZKP domain, thus calling for ZK-friendly hashes, which are CRH functions built with ZKP efficiency in mind. The most mature ZK-friendly hash, MiMC, presents a block cipher and hash function with a simple algebraic structure that is well-suited, due to its achieved security and low complexity, for ZKP applications. Although ZK-friendly hashes have improved the performance of ZKP generation in software, the underlying computation of ZKPs, including CRH functions, must be optimized on hardware to enable practical applications. The challenge we address in this work is determining how to efficiently incorporate ZK-friendly hash functions, such as MiMC, into hardware accelerators, thus enabling more practical applications. In this work, we introduce AMAZE, a highly hardware-optimized open-source framework for computing the MiMC block cipher and hash function. Our solution has been primarily directed at resource-constrained edge devices; consequently, we provide several implementations of MiMC with varying power, resource, and latency profiles. Our extensive evaluations show that the AMAZE-powered implementation of MiMC outperforms standard CPU implementations by more than 13$\times$. In all settings, AMAZE enables efficient ZK-friendly hashing on resource-constrained devices. 
Finally, we highlight AMAZE's underlying open-source arithmetic backend as part of our end-to-end design, thus allowing developers to utilize the AMAZE framework for custom ZKP applications.",True,True,Anees Ahmed and Nojan Sheybani and Davi Moreno and Nges Brian Njungle and Tengkai Gong and Michel Kinsy and Farinaz Koushanfar,2024.0,,https://arxiv.org/abs/2411.06350,,,"AMAZE: Accelerated MiMC Hardware Architecture for Zero-Knowledge Applications on the Edge",AMAZE: Accelerated MiMC Hardware Architecture for Zero- ...,http://www.arxiv.org/pdf/2411.06350,"by A Ahmed · 2024 · Cited by 8 — We propose AMAZE, a highly-optimized hardware architec- ture framework for computing the MiMC block cipher and hash function, a core operation in zero-knowledge" Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,gzkp,\cite{gzkp},GZKP: A GPU Accelerated Zero-Knowledge Proof System,,,True,False,"Ma, Weiliang and Xiong, Qian and Shi, Xuanhua and Ma, Xiaosong and Jin, Hai and Kuang, Haozhao and Gao, Mingyu and Zhang, Ye and Shen, Haichen and Hu, Weifang",2023.0,,https://doi.org/10.1145/3575693.3575711,10.1145/3575693.3575711,,GZKP: A GPU Accelerated Zero-Knowledge Proof System,GZKP: A GPU Accelerated Zero-Knowledge Proof System,https://dl.acm.org/doi/10.1145/3575693.3575711,"We develop GZKP, a GPU accelerated zero-knowledge proof system that supports different levels of security requirements and brings significant speedup." Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,szkp,\cite{szkp},SZKP: A Scalable Accelerator Architecture for Zero-Knowledge Proofs,http://arxiv.org/abs/2408.05890v1,"Zero-Knowledge Proofs (ZKPs) are an emergent paradigm in verifiable computing. In the context of applications like cloud computing, ZKPs can be used by a client (called the verifier) to verify the service provider (called the prover) is in fact performing the correct computation based on a public input. A recently prominent variant of ZKPs is zkSNARKs, generating succinct proofs that can be rapidly verified by the end user. However, proof generation itself is very time consuming per transaction. Two key primitives in proof generation are the Number Theoretic Transform (NTT) and Multi-scalar Multiplication (MSM). These primitives are prime candidates for hardware acceleration, and prior works have looked at GPU implementations and custom RTL. However, both algorithms involve complex dataflow patterns -- standard NTTs have irregular memory accesses for butterfly computations from stage to stage, and MSMs using Pippenger's algorithm have data-dependent memory accesses for partial sum calculations. We present SZKP, a scalable accelerator framework that is the first ASIC to accelerate an entire proof on-chip by leveraging structured dataflows for both NTTs and MSMs. SZKP achieves conservative full-proof speedups of over 400$\times$, 3$\times$, and 12$\times$ over CPU, ASIC, and GPU implementations.",True,True,"Daftardar, Alhad and Reagen, Brandon and Garg, Siddharth",2024.0,,,,,SZKP: A Scalable Accelerator Architecture for Zero-Knowledge Proofs,SZKP: A Scalable Accelerator Architecture for Zero-Knowledge Proofs,https://ieeexplore.ieee.org/iel8/10807299/10807300/10807301.pdf,"We present SZKP, a scalable accelerator framework that is the first ASIC to accelerate an entire proof on-chip by leveraging structured dataflows for both NTTs." 
Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,pipezk,\cite{pipezk},PipeZK: Accelerating Zero-Knowledge Proof with a Pipelined Architecture,,,True,False,Ye Zhang and Shuo Wang and Xian Zhang and Jiangbin Dong and Xingzhong Mao and Fan Long and Cong Wang and Dong Zhou and Mingyu Gao and Guangyu Sun,2021.0,,,,,PipeZK: Accelerating Zero-Knowledge Proof with a Pipelined Architecture,[PDF] PipeZK: Accelerating Zero-Knowledge Proof with a Pipelined ...,https://www.microsoft.com/en-us/research/wp-content/uploads/2021/05/isca21_pizk-60a269dbb1310.pdf,"PipeZK is an efficient pipelined accelerator for zero-knowledge proof, using two subsystems to handle intensive compute tasks." Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,ceremony,\cite{ceremony},What is the ZCash Ceremony? The Complete Beginners Guide,,,True,False,,2023.0,,,,,What is the ZCash Ceremony? The Complete Beginners Guide,What is ZCash? Beginners Guide to ZEC | Should You Consider it?,https://coinbureau.com/education/what-is-zcash/,"ZCash is actually a fork of Bitcoin that occurred in October of 2016. Much like Bitcoin, it is a decentralised peer-to-peer electronic cash." Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,trusted_set_up,\cite{trusted_set_up},"Zcash Nixes Trusted Setup, Enters New Era With Major Network Update",,,True,False,"Nelson, Jason",2022.0,Jun,https://decrypt.co/101762/zcash-nixes-trusted-setup-enters-new-era-with-major-network-update,,Decrypt,"Zcash Nixes Trusted Setup, Enters New Era With Major Network Update","Zcash Nixes Trusted Setup, Enters New Era With Major Network ...",https://www.aicoin.com/en/article/302565?lang=en,"The Electric Coin Company (ECC) announced today in a blog post the launch of the first major upgrade to the Zcash privacy coin network since November 2020," Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,garuda,\cite{garuda},Garuda and Pari: Faster and Smaller {SNARKs} via Equifficient Polynomial Commitments,,,True,False,Michel Dellepere and Pratyush Mishra and Alireza Shirzad,2024.0,,https://eprint.iacr.org/2024/1245,,,Garuda and Pari: Faster and Smaller {SNARKs} via Equifficient Polynomial Commitments,Faster and Smaller SNARKs via Equifficient Polynomial,https://dblp.org/rec/journals/iacr/DellepereMS24,"Michel Dellepere, Pratyush Mishra, Alireza Shirzad: Garuda and Pari: Faster and Smaller SNARKs via Equifficient Polynomial Commitments. IACR Cryptol." Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,verizexe,\cite{verizexe},VeriZexe: Decentralized Private Computation with Universal Setup,,,True,False,"Alex Luoyuan Xiong and Binyi Chen and Zhenfei Zhang and Benedikt B{\""{u}}nz and Ben Fisch and Fernando Krell and Philippe Camacho",2023.0,,https://www.usenix.org/conference/usenixsecurity23/presentation/xiong,,,VeriZexe: Decentralized Private Computation with Universal Setup,Decentralized Private Computation with Universal Setup,https://www.usenix.org/system/files/sec23fall-prepub-277-xiong-alex.pdf,"by AL Xiong · Cited by 40 — VERIZEXE, a DPC scheme instantiation that supports both one-time universal system setup and efficient transaction generation comparable to ZEXE (see Table 1)."
Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,2504.06211v1,poseidon,\cite{poseidon},Poseidon: A New Hash Function for Zero-Knowledge Proof Systems,,,True,False,Lorenzo Grassi and Dmitry Khovratovich and Christian Rechberger and Arnab Roy and Markus Schofnegger,2019.0,,https://eprint.iacr.org/2019/458,,,Poseidon: A New Hash Function for Zero-Knowledge Proof Systems,A New Hash Function for Zero-Knowledge Proof Systems ...,https://autoparallel.github.io/poseidon/index.html,"The paper describes the design, implementation, and security analysis of Poseidon, highlighting its efficiency in zero-knowledge (ZK) proof systems," "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,li2022help,\cite{li2022help},Help rather than recycle: Alleviating cold startup in serverless computing through {Inter-Function} container sharing,,,True,False,"Li, Zijun and Guo, Linsong and Chen, Quan and Cheng, Jiagan and Xu, Chuhao and Zeng, Deze and Song, Zhuo and Ma, Tao and Yang, Yong and Li, Chao and others",2022.0,,,,,Help rather than recycle: Alleviating cold startup in serverless computing through {Inter-Function} container sharing,Help Rather Than Recycle: Alleviating Cold Startup in Serverless ...,https://www.liborui.cn/seminar/20231010/,Help Rather Than Recycle: Alleviating Cold Startup in Serverless Computing Through Inter-Function Container Sharing. Weilong Wang. Oct 10 "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,saxena2022memory,\cite{saxena2022memory},Memory deduplication for serverless computing with medes,,,True,False,"Saxena, Divyanshu and Ji, Tao and Singhvi, Arjun and Khalid, Junaid and Akella, Aditya",2022.0,,,,,Memory deduplication for serverless computing with medes,[PDF] Memory Deduplication for Serverless Computing with Medes,https://pages.cs.wisc.edu/~junaid/papers/medes.pdf,Medes leverages the fact that the warm sandboxes running on serverless platforms have a high fraction of duplication in their memory footprints. "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,chen2023s,\cite{chen2023s},S-cache: Function caching for serverless edge computing,,,True,False,"Chen, Chen and Nagel, Lars and Cui, Lin and Tso, Fung Po",2023.0,,,,,S-cache: Function caching for serverless edge computing,S-Cache: Function Caching for Serverless Edge Computing,https://www.researchgate.net/publication/370601194_S-Cache_Function_Caching_for_Serverless_Edge_Computing,S-Cache [27] investigates the problem of container placement with latency optimization in edge computing environments. A priority-based algorithm is proposed to "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,basu2024codecrunch,\cite{basu2024codecrunch},CodeCrunch: Improving Serverless Performance via Function Compression and Cost-Aware Warmup Location Optimization,,,True,False,"Basu Roy, Rohan and Patel, Tirthak and Garg, Rohan and Tiwari, Devesh",2024.0,,,,,CodeCrunch: Improving Serverless Performance via Function Compression and Cost-Aware Warmup Location Optimization,Improving Serverless Performance via Function Compression and ...,https://www.researchgate.net/publication/379904791_CodeCrunch_Improving_Serverless_Performance_via_Function_Compression_and_Cost-Aware_Warmup_Location_Optimization,"Basu Roy et al.
[25] introduced CodeCrunch, a method that improves serverless performance via function compression and cost-aware warmup location optimization," "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,pan2023sustainable,\cite{pan2023sustainable},Sustainable serverless computing with cold-start optimization and automatic workflow resource scheduling,,,True,False,"Pan, Shanxing and Zhao, Hongyu and Cai, Zinuo and Li, Dongmei and Ma, Ruhui and Guan, Haibing",2023.0,,,,IEEE Transactions on Sustainable Computing,Sustainable serverless computing with cold-start optimization and automatic workflow resource scheduling,Sustainable Serverless Computing With Cold-Start ...,https://ieeexplore.ieee.org/document/10237322,by S Pan · 2023 · Cited by 32 — Optimal resource scheduling of serverless computing has become imperative to reduce energy consumption and enable sustainable computing. However "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,bhasi2021kraken,\cite{bhasi2021kraken},Kraken: Adaptive container provisioning for deploying dynamic dags in serverless platforms,,,True,False,"Bhasi, Vivek M and Gunasekaran, Jashwant Raj and Thinakaran, Prashanth and Mishra, Cyan Subhra and Kandemir, Mahmut Taylan and Das, Chita",2021.0,,,,,Kraken: Adaptive container provisioning for deploying dynamic dags in serverless platforms,[PDF] Adaptive Container Provisioning for Deploying Dynamic DAGs in ...,https://jashwantraj92.github.io/assets/files/socc2021-paper200.pdf,"Our results show that Kraken spawns up to 76% fewer containers on average, thereby, improving container utilization and cluster-wide energy savings by up to 4×" "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,gunasekaran2020fifer,\cite{gunasekaran2020fifer},Fifer: Tackling Underutilization in the Serverless Era,http://arxiv.org/abs/2008.12819v1,"Datacenters are witnessing a rapid surge in the adoption of serverless functions for microservices-based applications. A vast majority of these microservices typically span less than a second, have strict SLO requirements, and are chained together as per the requirements of an application. The aforementioned characteristics introduce a new set of challenges, especially in terms of container provisioning and management, as the state-of-the-art resource management frameworks, employed in serverless platforms, tend to look at microservice-based applications similar to conventional monolithic applications. Hence, these frameworks suffer from microservice-agnostic scheduling and colossal container over-provisioning, especially during workload fluctuations, thereby resulting in poor resource utilization. In this work, we quantify the above shortcomings using a variety of workloads on a multi-node cluster managed by Kubernetes and Brigade serverless framework. To address them, we propose \emph{Fifer} -- an adaptive resource management framework to efficiently manage function-chains on serverless platforms. The key idea is to make \emph{Fifer} (i) utilization conscious by efficiently bin packing jobs to fewer containers using function-aware container scaling and intelligent request batching, and (ii) at the same time, SLO-compliant by proactively spawning containers to avoid cold-starts, thus minimizing the overall response latency. 
Combining these benefits, \emph{Fifer} improves container utilization and cluster-wide energy consumption by 4x and 31%, respectively, without compromising on SLO's, when compared to the state-of-the-art schedulers employed by serverless platforms.",True,True,"Gunasekaran, Jashwant Raj and Thinakaran, Prashanth and Chidambaram, Nachiappan and Kandemir, Mahmut T and Das, Chita R",2020.0,,,,arXiv preprint arXiv:2008.12819,Fifer: Tackling Underutilization in the Serverless Era,Fifer: Tackling Underutilization in the Serverless Era,http://arxiv.org/pdf/2008.12819v1,"Datacenters are witnessing a rapid surge in the adoption of serverless functions for microservices-based applications. A vast majority of these microservices typically span less than a second, have strict SLO requirements, and are chained together as per the requirements of an application. The aforementioned characteristics introduce a new set of challenges, especially in terms of container provisioning and management, as the state-of-the-art resource management frameworks, employed in serverless platforms, tend to look at microservice-based applications similar to conventional monolithic applications. Hence, these frameworks suffer from microservice-agnostic scheduling and colossal container over-provisioning, especially during workload fluctuations, thereby resulting in poor resource utilization. In this work, we quantify the above shortcomings using a variety of workloads on a multi-node cluster managed by Kubernetes and Brigade serverless framework. To address them, we propose \emph{Fifer} -- an adaptive resource management framework to efficiently manage function-chains on serverless platforms. The key idea is to make \emph{Fifer} (i) utilization conscious by efficiently bin packing jobs to fewer containers using function-aware container scaling and intelligent request batching, and (ii) at the same time, SLO-compliant by proactively spawning containers to avoid cold-starts, thus minimizing the overall response latency. Combining these benefits, \emph{Fifer} improves container utilization and cluster-wide energy consumption by 4x and 31%, respectively, without compromising on SLO's, when compared to the state-of-the-art schedulers employed by serverless platforms." "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,roy2022icebreaker,\cite{roy2022icebreaker},Icebreaker: Warming serverless functions better with heterogeneity,,,True,False,"Roy, Rohan Basu and Patel, Tirthak and Tiwari, Devesh",2022.0,,,,,Icebreaker: Warming serverless functions better with heterogeneity,[PDF] IceBreaker: Warming Serverless Functions Better with Heterogeneity,http://www1.ece.neu.edu/~ningfang/SimPaper/icebreaker-ASPLOS22.pdf,"IceBreaker reduces serverless function service time and keep-alive costs by using heterogeneous nodes, dynamically choosing the cost-effective node type." "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,shahrad2020serverless,\cite{shahrad2020serverless},"Serverless in the Wild: Characterizing and Optimizing the Serverless Workload at a Large Cloud Provider",http://arxiv.org/abs/2003.03423v3,"Function as a Service (FaaS) has been gaining popularity as a way to deploy computations to serverless backends in the cloud. 
This paradigm shifts the complexity of allocating and provisioning resources to the cloud provider, which has to provide the illusion of always-available resources (i.e., fast function invocations without cold starts) at the lowest possible resource cost. Doing so requires the provider to deeply understand the characteristics of the FaaS workload. Unfortunately, there has been little to no public information on these characteristics. Thus, in this paper, we first characterize the entire production FaaS workload of Azure Functions. We show for example that most functions are invoked very infrequently, but there is an 8-order-of-magnitude range of invocation frequencies. Using observations from our characterization, we then propose a practical resource management policy that significantly reduces the number of function cold starts, while spending fewer resources than state-of-the-practice policies.",True,True,"Shahrad, Mohammad and Fonseca, Rodrigo and Goiri, Inigo and Chaudhry, Gohar and Batum, Paul and Cooke, Jason and Laureano, Eduardo and Tresness, Colby and Russinovich, Mark and Bianchini, Ricardo",2020.0,,,,,"Serverless in the Wild: Characterizing and Optimizing the Serverless Workload at a Large Cloud Provider",Characterizing and Optimizing the Serverless Workload at ...,https://www.usenix.org/system/files/atc20-shahrad.pdf,"by M Shahrad · 2020 · Cited by 879 — This paper characterizes Azure Functions' serverless workload, showing most functions are invoked infrequently, and proposes a resource" "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,sui2024pre,\cite{sui2024pre},Pre-Warming is Not Enough: Accelerating Serverless Inference With Opportunistic Pre-Loading,,,True,False,"Sui, Yifan and Yu, Hanfei and Hu, Yitao and Li, Jianxun and Wang, Hao",2024.0,,,,,Pre-Warming is Not Enough: Accelerating Serverless Inference With Opportunistic Pre-Loading,Accelerating Serverless Inference With Opportunistic Pre-Loading,https://www.researchgate.net/publication/385976201_Pre-Warming_is_Not_Enough_Accelerating_Serverless_Inference_With_Opportunistic_Pre-Loading,Pre-Warming is Not Enough: Accelerating Serverless Inference With Opportunistic Pre-Loading. November 2024. DOI:10.1145/3698038.3698509.
Conference: SoCC '24 "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,ao2022faasnap,\cite{ao2022faasnap},Faasnap: Faas made fast using snapshot-based vms,,,True,False,"Ao, Lixiang and Porter, George and Voelker, Geoffrey M",2022.0,,,,,Faasnap: Faas made fast using snapshot-based vms,[PDF] FaaSnap: FaaS Made Fast Using Snapshot-based VMs,https://www.sysnet.ucsd.edu/~voelker/pubs/faasnap-eurosys22.pdf,FaaSnap is a VM snapshot-based platform that improves FaaS function cold-start performance using optimizations to reduce the cost of restoring guest VM "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,du2020catalyzer,\cite{du2020catalyzer},Catalyzer: Sub-millisecond startup for serverless computing with initialization-less booting,,,True,False,"Du, Dong and Yu, Tianyi and Xia, Yubin and Zang, Binyu and Yan, Guanglu and Qin, Chenggang and Wu, Qixuan and Chen, Haibo",2020.0,,,,,Catalyzer: Sub-millisecond startup for serverless computing with initialization-less booting,Catalyzer: Sub-millisecond Startup for Serverless ...,https://ipads.se.sjtu.edu.cn/_media/publications/catalyzer-asplos20.pdf,"by D Du · 2020 · Cited by 365 — Instead of booting from scratch, Catalyzer restores a virtualization-based function instance from a well-formed checkpoint image and thereby" "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,silva2020prebaking,\cite{silva2020prebaking},Prebaking functions to warm the serverless cold start,,,True,False,"Silva, Paulo and Fireman, Daniel and Pereira, Thiago Emmanuel",2020.0,,,,,Prebaking functions to warm the serverless cold start,Prebaking Functions to Warm the Serverless Cold Start,https://dl.acm.org/doi/10.1145/3423211.3425682, "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,provisionedConcurrencyAWS,\cite{provisionedConcurrencyAWS},Provisioned concurrency for lambda functions,,,True,False,,2022.0,,,,,Provisioned concurrency for lambda functions,New – Provisioned Concurrency for Lambda Functions,https://aws.amazon.com/blogs/aws/new-provisioned-concurrency-for-lambda-functions/,"New – Provisioned Concurrency for Lambda Functions | AWS News Blog When you
enable Provisioned Concurrency for a function, the Lambda service will initialize the requested number of execution environments so they can be ready to respond to invocations. As expected for my test workload, I see a big difference in the response time of the slowest 5% of the requests (between 95% and 100%), where the function with Provisioned Concurrency disabled shows the latency added by the creation of new execution environments and the (slow) initialization in my function code. **Available Now** Provisioned Concurrency can be configured using the console, the AWS Command Line Interface (AWS CLI), or AWS SDKs for new or existing Lambda functions, and is available today in the following AWS Regions: in US East (Ohio), US East (N." "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,improveColdstartByIncreasingMemory,\cite{improveColdstartByIncreasingMemory},Architecting a Serverless web application in AWS,,,True,False,,2016.0,,,,,Architecting a Serverless web application in AWS,Build your first Serverless Web Application,https://aws.amazon.com/serverless/build-a-web-app/,You can build a serverless web application by using several AWS services together. Each service is fully managed and does not require you to provision or "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,optimisingServerlessForBBC,\cite{optimisingServerlessForBBC},Optimising serverless for BBC Online,,,True,False,,2021.0,,,,,Optimising serverless for BBC Online,On Demand: How BBC Online Fine-Tuned Serverless to Cut Costs,https://www.srvrlss.io/blog/bbc-online-serverless/,Part 3: Optimising serverless for BBC Online. Particularly the third post of the series caught our attention as it demonstrates how the BBC "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,fuerst2021faascache,\cite{fuerst2021faascache},FaasCache: keeping serverless computing alive with greedy-dual caching,,,True,False,"Fuerst, Alexander and Sharma, Prateek",2021.0,,,,,FaasCache: keeping serverless computing alive with greedy-dual caching,[PDF] FaasCache: Keeping Serverless Computing Alive with Greedy-Dual ...,https://afuerst.github.io/assets/FaasCache.pdf,"Keep-alive policies must keep functions alive based on their resource and usage characteristics, which is challenging due to the diversity in FaaS workloads." "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,pan2022retention,\cite{pan2022retention},Retention-aware container caching for serverless edge computing,,,True,False,"Pan, Li and Wang, Lin and Chen, Shutong and Liu, Fangming",2022.0,,,,,Retention-aware container caching for serverless edge computing,Retention-Aware Container Caching for Serverless Edge ...,https://ieeexplore.ieee.org/iel7/9796607/9796652/09796705.pdf,"by L Pan · 2022 · Cited by 91 — In this paper, we study the retention-aware container caching problem in serverless edge computing.
We leverage the distributed and heterogeneous nature of edge ..." "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,yu2024rainbowcake,\cite{yu2024rainbowcake},RainbowCake: Mitigating Cold-starts in Serverless with Layer-wise Container Caching and Sharing,,,True,False,"Yu, Hanfei and Basu Roy, Rohan and Fontenot, Christian and Tiwari, Devesh and Li, Jian and Zhang, Hong and Wang, Hao and Park, Seung-Jong",2024.0,,,,,RainbowCake: Mitigating Cold-starts in Serverless with Layer-wise Container Caching and Sharing,RainbowCake: Mitigating Cold-starts in Serverless with Layer-wise ...,https://dl.acm.org/doi/10.1145/3617232.3624871,"This paper proposes RainbowCake, a layer-wise container pre-warming and keep-alive technique that effectively mitigates cold-starts with sharing awareness at" "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,lee2021mitigating,\cite{lee2021mitigating},Mitigating cold start problem in serverless computing with function fusion,,,True,False,"Lee, Seungjun and Yoon, Daegun and Yeo, Sangho and Oh, Sangyoon",2021.0,,,,Sensors,Mitigating cold start problem in serverless computing with function fusion,Mitigating Cold Start Problem in Serverless Computing with Function ...,https://www.mdpi.com/1424-8220/21/24/8416,This study presents an approach to mitigate the cold start latency of a workflow using function fusion while considering a parallel run. "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,kalia2021mono2micro,\cite{kalia2021mono2micro},"Mono2Micro: A Practical and Effective Tool for Decomposing Monolithic Java Applications to Microservices",http://arxiv.org/abs/2107.09698v2,"In migrating production workloads to cloud, enterprises often face the daunting task of evolving monolithic applications toward a microservice architecture. At IBM, we developed a tool called Mono2Micro to assist with this challenging task. Mono2Micro performs spatio-temporal decomposition, leveraging well-defined business use cases and runtime call relations to create functionally cohesive partitioning of application classes. Our preliminary evaluation of Mono2Micro showed promising results. How well does Mono2Micro perform against other decomposition techniques, and how do practitioners perceive the tool? This paper describes the technical foundations of Mono2Micro and presents results to answer these two questions. To answer the first question, we evaluated Mono2Micro against four existing techniques on a set of open-source and proprietary Java applications and using different metrics to assess the quality of decomposition and tool's efficiency. Our results show that Mono2Micro significantly outperforms state-of-the-art baselines in specific metrics well-defined for the problem domain. To answer the second question, we conducted a survey of twenty-one practitioners in various industry roles who have used Mono2Micro. This study highlights several benefits of the tool, interesting practitioner perceptions, and scope for further improvements.
Overall, these results show that Mono2Micro can provide a valuable aid to practitioners in creating functionally cohesive and explainable microservice decompositions.",True,True,"Kalia, Anup K and Xiao, Jin and Krishna, Rahul and Sinha, Saurabh and Vukovic, Maja and Banerjee, Debasish",2021.0,,,,,"Mono2Micro: A Practical and Effective Tool for Decomposing Monolithic Java Applications to Microservices",(PDF) Mono2Micro: a practical and effective tool for decomposing ...,https://www.researchgate.net/publication/354057927_Mono2Micro_a_practical_and_effective_tool_for_decomposing_monolithic_Java_applications_to_microservices,Mono2Micro consists of a set of tools that collect static and runtime information from a monolithic application and process the information "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,nitin2022cargo,\cite{nitin2022cargo},"CARGO: AI-Guided Dependency Analysis for Migrating Monolithic Applications to Microservices Architecture",http://arxiv.org/abs/2207.11784v2,"Microservices Architecture (MSA) has become a de-facto standard for designing cloud-native enterprise applications due to its efficient infrastructure setup, service availability, elastic scalability, dependability, and better security. Existing (monolithic) systems must be decomposed into microservices to harness these characteristics. Since manual decomposition of large scale applications can be laborious and error-prone, AI-based systems to detect decomposition strategies are gaining popularity. However, the usefulness of these approaches is limited by the expressiveness of the program representation and their inability to model the application's dependency on critical external resources such as databases. Consequently, partitioning recommendations offered by current tools result in architectures that result in (a) distributed monoliths, and/or (b) force the use of (often criticized) distributed transactions. This work attempts to overcome these challenges by introducing CARGO (short for [C]ontext-sensitive l[A]bel p[R]opa[G]ati[O]n) - a novel un-/semi-supervised partition refinement technique that uses a context- and flow-sensitive system dependency graph of the monolithic application to refine and thereby enrich the partitioning quality of the current state-of-the-art algorithms. CARGO was used to augment four state-of-the-art microservice partitioning techniques that were applied on five Java EE applications (including one industrial scale proprietary project). Experiments demonstrate that CARGO can improve the partition quality of all modern microservice partitioning techniques.
Further, CARGO substantially reduces distributed transactions and a real-world performance evaluation of a benchmark application (deployed under varying loads) shows that CARGO also lowers the overall latency of the deployed microservice application by 11% and increases throughput by 120% on average.",True,True,"Nitin, Vikram and Asthana, Shubhi and Ray, Baishakhi and Krishna, Rahul",2022.0,,,,,"CARGO: AI-Guided Dependency Analysis for Migrating Monolithic Applications to Microservices Architecture",CARGO: AI-Guided Dependency Analysis for Migrating ...,https://arxiv.org/abs/2207.11784,"[2207.11784] CARGO: AI-Guided Dependency Analysis for Migrating Monolithic Applications to Microservices Architecture, by Vikram Nitin and Shubhi Asthana and Baishakhi Ray and Rahul Krishna" "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,abgaz2023decomposition,\cite{abgaz2023decomposition},Decomposition of monolith applications into microservices architectures: A systematic review,,,True,False,"Abgaz, Yalemisew and McCarren, Andrew and Elger, Peter and Solan, David and Lapuz, Neil and Bivol, Marin and Jackson, Glenn and Yilmaz, Murat and Buckley, Jim and Clarke, Paul",2023.0,,,,IEEE Transactions on Software Engineering,Decomposition of monolith applications into microservices architectures: A systematic review,(PDF) Decomposition of Monolith Applications Into Microservices ...,https://www.researchgate.net/publication/371821973_Decomposition_of_Monolith_Applications_Into_Microservices_Architectures_A_Systematic_Review,This paper rigorously examines 35 research papers selected from well-known databases using a Systematic Literature Review (SLR) protocol and snowballing method. "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,liu2023faaslight,\cite{liu2023faaslight},"FaaSLight: General Application-Level Cold-Start Latency Optimization for Function-as-a-Service in Serverless Computing",http://arxiv.org/abs/2207.08175v2,"Serverless computing is a popular cloud computing paradigm that frees developers from server management. Function-as-a-Service (FaaS) is the most popular implementation of serverless computing, representing applications as event-driven and stateless functions. However, existing studies report that functions of FaaS applications severely suffer from cold-start latency. In this paper, we propose an approach namely FaaSLight to accelerating the cold start for FaaS applications through application-level optimization. We first conduct a measurement study to investigate the possible root cause of the cold start problem of FaaS. The result shows that application code loading latency is a significant overhead. Therefore, loading only indispensable code from FaaS applications can be an adequate solution.
Based on this insight, we identify code related to application functionalities by constructing the function-level call graph, and separate other code (i.e., optional code) from FaaS applications. The separated optional code can be loaded on demand to avoid the inaccurate identification of indispensable code causing application failure. In particular, a key principle guiding the design of FaaSLight is inherently general, i.e., platform- and language-agnostic. The evaluation results on real-world FaaS applications show that FaaSLight can significantly reduce the code loading latency (up to 78.95%, 28.78% on average), thereby reducing the cold-start latency. As a result, the total response latency of functions can be decreased by up to 42.05% (19.21% on average). Compared with the state-of-the-art, FaaSLight achieves a 21.25X improvement in reducing the average total response latency.",True,True,"Liu, Xuanzhe and Wen, Jinfeng and Chen, Zhenpeng and Li, Ding and Chen, Junkai and Liu, Yi and Wang, Haoyu and Jin, Xin",2023.0,,,,ACM Transactions on Software Engineering and Methodology,"FaaSLight: General Application-Level Cold-Start Latency Optimization for Function-as-a-Service in Serverless Computing",General Application-Level Cold-Start Latency Optimization for ...,https://conf.researchr.org/details/ase-2023/ase-2023-journal-first-papers/25/FaaSLight-General-Application-Level-Cold-Start-Latency-Optimization-for-Function-as-,FaaSLight: General Application-Level Cold-Start Latency Optimization for Function-as-a-Service in Serverless Computing · Program Display Configuration · Program "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,frostig2018compiling,\cite{frostig2018compiling},Compiling machine learning programs via high-level tracing,,,True,False,"Frostig, Roy and Johnson, Matthew James and Leary, Chris",2018.0,,,,Systems for Machine Learning,Compiling machine learning programs via high-level tracing,[PDF] Compiling machine learning programs via high-level tracing,https://cs.stanford.edu/~rfrostig/pubs/jax-mlsys2018.pdf,"JAX is a tracing JIT compiler that generates high-performance code from Python and Numpy ML programs, using high-level tracing and XLA for optimization." "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,graalvm,\cite{graalvm},The GraalVM native image,,,True,False,GraalVM,2003.0,,,,,The GraalVM native image,GraalVM Native Image,https://www.graalvm.org/22.3/reference-manual/native-image/,"Native Image compiles Java code ahead-of-time to a native executable, including only the code needed at runtime, created by the native-image tool." "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,proguard,\cite{proguard},The industry-leading Java optimizer for Android apps,,,True,False,ProGuard,2002.0,,,,,The industry-leading Java optimizer for Android apps,"ProGuard Reviews 2025: Details, Pricing, & Features - G2",https://www.g2.com/products/proguard/reviews,ProGuard Reviews & Product Details. The industry-leading Java optimizer for Android apps. "Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization",2504.19283v1,r8_android,\cite{r8_android},R8 Compiler for Android applications,,,True,False,R8,2019.0,,,,,R8 Compiler for Android applications,What is R8 and how we enabled it - Stefan M. 
- Medium,https://stefma.medium.com/what-is-r8-and-how-we-enabled-it-4f5764a7ff9c,"R8 is a compiler that converts Java bytecode into “optimized” dex code. When you want to run your Java (or Kotlin) code on an Android device, the build tool" "Kubernetes in the Cloud vs. Bare Metal: A Comparative Study of Network Costs",2504.11007v1,marino2023dynamic,\cite{marino2023dynamic},Dynamic Optimization of Provider-Based Scheduling for HPC Workloads,,,True,False,"Marino, Jacopo and Risso, Fulvio and Bighi, Mauro",2023.0,,,,,Dynamic Optimization of Provider-Based Scheduling for HPC Workloads,Dynamic Optimization of Provider-Based Scheduling for HPC ...,https://www.researchgate.net/publication/374613581_Dynamic_Optimization_of_Provider-Based_Scheduling_for_HPC_Workloads,"For workloads comprising multiple applications, a speed-up of up to 5x in the total execution time is noted. Moreover, the average GPU utilisation and average" "Kubernetes in the Cloud vs. Bare Metal: A Comparative Study of Network Costs",2504.11007v1,verreydt2019leveraging,\cite{verreydt2019leveraging},Leveraging Kubernetes for adaptive and cost-efficient resource management,,,True,False,"Verreydt, Stef and Beni, Emad Heydari and Truyen, Eddy and Lagaisse, Bert and Joosen, Wouter",2019.0,,,,,Leveraging Kubernetes for adaptive and cost-efficient resource management,[PDF] Leveraging Kubernetes for adaptive and cost-efficient resource ...,https://heydari.be/papers/WoC-stef.pdf,"The goal of this paper is to research how applications can be enhanced with adaptive performance management by relying on the capabilities of Kubernetes, a" "Kubernetes in the Cloud vs. Bare Metal: A Comparative Study of Network Costs",2504.11007v1,gao2020hierarchical,\cite{gao2020hierarchical},Hierarchical multi-agent optimization for resource allocation in cloud computing,,,True,False,"Gao, Xiangqiang and Liu, Rongke and Kaushik, Aryan",2020.0,,,,IEEE Transactions on Parallel and Distributed Systems,Hierarchical multi-agent optimization for resource allocation in cloud computing,Hierarchical Multi-Agent Optimization for Resource Allocation in ...,https://www.scienceopen.com/document?vid=cd34411e-d669-4ad6-805d-83132f7f6f30,"Hierarchical Multi-Agent Optimization for Resource Allocation in Cloud Computing. Author(s): Xiangqiang Gao , Rongke Liu , Aryan Kaushik." "Kubernetes in the Cloud vs. Bare Metal: A Comparative Study of Network Costs",2504.11007v1,chhabra2021dynamic,\cite{chhabra2021dynamic},"Dynamic Resource Allocation Method for Load Balance Scheduling over Cloud Data Center Networks",http://arxiv.org/abs/2211.02352v1,"The cloud datacenter has numerous hosts as well as application requests where resources are dynamic. The demands placed on the resource allocation are diverse. These factors could lead to load imbalances, which affect scheduling efficiency and resource utilization. A scheduling method called Dynamic Resource Allocation for Load Balancing (DRALB) is proposed. The proposed solution constitutes two steps: First, the load manager analyzes the resource requirements such as CPU, Memory, Energy and Bandwidth usage and allocates an appropriate number of VMs for each application. Second, the resource information is collected and updated where resources are sorted into four queues according to the loads of resources i.e. CPU intensive, Memory intensive, Energy intensive and Bandwidth intensive. We demonstrate that SLA-aware scheduling not only facilitates the cloud consumers by resources availability and improves throughput, response time etc.
but also maximizes the cloud profits with less resource utilization and SLA (Service Level Agreement) violation penalties. This method is based on diversity of clients applications and searching the optimal resources for the particular deployment. Experiments were carried out based on following parameters i.e. average response time; resource utilization, SLA violation rate and load balancing. The experimental results demonstrate that this method can reduce the wastage of resources and reduces the traffic up to 44.89 and 58.49 in the network.",True,True,"Chhabra, Sakshi and Singh, Ashutosh Kumar",2021.0,,,,Journal of Web Engineering,"Dynamic Resource Allocation Method for Load Balance Scheduling over Cloud Data Center Networks",Dynamic Resource Allocation Method for Load Balance ...,https://www.researchgate.net/publication/365181300_Dynamic_Resource_Allocation_Method_for_Load_Balance_Scheduling_over_Cloud_Data_Center_Networks,A scheduling method called Dynamic Resource Allocation for Load Balancing (DRALB) is proposed. The proposed solution constitutes two steps: "Kubernetes in the Cloud vs. Bare Metal: A Comparative Study of Network Costs",2504.11007v1,dong2023agent,\cite{dong2023agent},Agent-based cloud simulation model for resource management,,,True,False,"Dong, Dapeng",2023.0,,,,Journal of Cloud Computing,Agent-based cloud simulation model for resource management,[TeX] Agent-Based Cloud Simulation Model for Resource Management,https://www.techrxiv.org/users/689358/articles/681149/download_latex,"This paper presents an agent-based cloud simulation model for resource management. The focus is on how service placement strategies, service migration, and" "Kubernetes in the Cloud vs. Bare Metal: A Comparative Study of Network Costs",2504.11007v1,cho2020cost,\cite{cho2020cost},A cost estimation model for cloud services and applying to PC laboratory platforms,,,True,False,"Cho, KyungWoon and Bahn, Hyokyung",2020.0,,,,Processes,A cost estimation model for cloud services and applying to PC laboratory platforms,A Cost Estimation Model for Cloud Services and Applying to PC ...,https://www.researchgate.net/publication/338464515_A_Cost_Estimation_Model_for_Cloud_Services_and_Applying_to_PC_Laboratory_Platforms,"In this article, we present an instant cost estimation model for estimating the cost of public cloud resources. Specifically, our model" "Kubernetes in the Cloud vs. Bare Metal: A Comparative Study of Network Costs",2504.11007v1,xu2018cost,\cite{xu2018cost},Cost-effective cloud server provisioning for predictable performance of big data analytics,,,True,False,"Xu, Fei and Zheng, Haoyue and Jiang, Huan and Shao, Wujie and Liu, Haikun and Zhou, Zhi",2018.0,,,,IEEE Transactions on Parallel and Distributed Systems,Cost-effective cloud server provisioning for predictable performance of big data analytics,Cost Effective Cloud Server Provisioning for Predictable ...,https://www.researchgate.net/publication/341052800_Cost_Effective_Cloud_Server_Provisioning_for_Predictable_Performance_of_Big_Data_Analytics,Cost Effective Cloud Server Provisioning for Predictable Performance of Big Data Analytics. April 2020; International Journal for Research in Applied Science "Kubernetes in the Cloud vs.
Bare Metal: A Comparative Study of Network Costs",2504.11007v1,de2023cost,\cite{de2023cost},Cost-Profiling Microservice Applications Using an APM Stack,,,True,False,"de Vries, Sjouke and Blaauw, Frank and Andrikopoulos, Vasilios",2023.0,,,,Future Internet,Cost-Profiling Microservice Applications Using an APM Stack,Cost-Profiling Microservice Applications Using an APM Stack,https://www.researchgate.net/publication/367115689_Cost-Profiling_Microservice_Applications_Using_an_APM_Stack,"In response to that, in this work, we present a cost-profiling solution aimed at Kubernetes-based microservice applications, building on a" "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,NVIDIA_blackwell,\cite{NVIDIA_blackwell},NVIDIA blackwell architecture,,,True,False,NVIDIA,,,,,,NVIDIA blackwell architecture,NVIDIA Blackwell Architecture Technical Overview,https://resources.nvidia.com/en-us-blackwell-architecture,"NVIDIA's Blackwell GPU architecture revolutionizes AI with unparalleled performance, scalability and efficiency. Anchored by the Grace Blackwell GB200" "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,jouppi2023tpu,\cite{jouppi2023tpu},"TPU v4: An Optically Reconfigurable Supercomputer for Machine Learning with Hardware Support for Embeddings",http://arxiv.org/abs/2304.01433v3,"In response to innovations in machine learning (ML) models, production workloads changed radically and rapidly. TPU v4 is the fifth Google domain specific architecture (DSA) and its third supercomputer for such ML models. Optical circuit switches (OCSes) dynamically reconfigure its interconnect topology to improve scale, availability, utilization, modularity, deployment, security, power, and performance; users can pick a twisted 3D torus topology if desired. Much cheaper, lower power, and faster than Infiniband, OCSes and underlying optical components are <5% of system cost and <3% of system power. Each TPU v4 includes SparseCores, dataflow processors that accelerate models that rely on embeddings by 5x-7x yet use only 5% of die area and power. Deployed since 2020, TPU v4 outperforms TPU v3 by 2.1x and improves performance/Watt by 2.7x. The TPU v4 supercomputer is 4x larger at 4096 chips and thus ~10x faster overall, which along with OCS flexibility helps large language models. For similar sized systems, it is ~4.3x-4.5x faster than the Graphcore IPU Bow and is 1.2x-1.7x faster and uses 1.3x-1.9x less power than the Nvidia A100. TPU v4s inside the energy-optimized warehouse scale computers of Google Cloud use ~3x less energy and produce ~20x less CO2e than contemporary DSAs in a typical on-premise data center.",True,True,"Jouppi, Norm and Kurian, George and Li, Sheng and Ma, Peter and Nagarajan, Rahul and Nai, Lifeng and Patil, Nishant and Subramanian, Suvinay and Swing, Andy and Towles, Brian and others",2023.0,,,,,"TPU v4: An Optically Reconfigurable Supercomputer for Machine Learning with Hardware Support for Embeddings",TPU v4: An Optically Reconfigurable Supercomputer for Machine ...,https://arxiv.org/abs/2304.01433,"The TPU v4 supercomputer is 4x larger at 4096 chips and thus ~10x faster overall, which along with OCS flexibility helps large language models." 
"Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,NVProf,\cite{NVProf},NVProf,,,True,False,NVIDIA,,,,,,NVProf,Profiling code with nvprof - MIT Satori,https://mit-satori.github.io/tutorial-examples/nvprof-profiling/index.html,The nvprof tool from NVidia can be used to create detailed profiles of where codes are spending time and what resources they are using. "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,CUPTI,\cite{CUPTI},CUPTI,,,True,False,NVIDIA,,,,,,CUPTI,NVIDIA CUDA Profiling Tools Interface (CUPTI) - CUDA Toolkit,https://developer.nvidia.com/cupti,The NVIDIA CUDA Profiling Tools Interface (CUPTI) is a library that enables the creation of profiling and tracing tools that target CUDA applications. "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,Nsight,\cite{Nsight},Nsight,,,True,False,NVIDIA,,,,,,Nsight,Nsight,https://www.nsight.com/,"Nsight is a telecom company that's most powerful asset is our people. We are dedicated to hiring only the best, the brightest and the most committed" "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,pytorch-kineto,\cite{pytorch-kineto},PyTorch Kineto,,,True,False,Kineto,,,,,,PyTorch Kineto,Automated trace collection and analysis - PyTorch,https://pytorch.org/blog/automated-trace-collection/,Kineto is the subsystem within Profiler that interfaces with CUPTI. The PyTorch Profiler leverages the Kineto library to collect GPU traces. "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,vaswani2017attention,\cite{vaswani2017attention},Attention Is All You Need,http://arxiv.org/abs/1706.03762v7,"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.",True,True,"Vaswani, A",2017.0,,,,Advances in Neural Information Processing Systems,Attention Is All You Need,Attention Is All You Need,http://arxiv.org/pdf/1706.03762v7,"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. 
Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data." "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,radford2019language,\cite{radford2019language},Language models are unsupervised multitask learners,,,True,False,"Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others",2019.0,,,,OpenAI blog,Language models are unsupervised multitask learners,[PDF] Language Models are Unsupervised Multitask Learners,https://www.semanticscholar.org/paper/Language-Models-are-Unsupervised-Multitask-Learners-Radford-Wu/9405cc0d6169988371b2755e573cc28650d14dfe,It is demonstrated that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,brown2020language,\cite{brown2020language},Language Models are Few-shot Multilingual Learners,http://arxiv.org/abs/2109.07684v1,"General-purpose language models have demonstrated impressive capabilities, performing on par with state-of-the-art approaches on a range of downstream natural language processing (NLP) tasks and benchmarks when inferring instructions from very few examples. Here, we evaluate the multilingual skills of the GPT and T5 models in conducting multi-class classification on non-English languages without any parameter updates. We show that, given a few English examples as context, pre-trained language models can predict not only English test samples but also non-English ones. Finally, we find the in-context few-shot cross-lingual prediction results of language models are significantly better than random prediction, and they are competitive compared to the existing state-of-the-art cross-lingual models.",True,True,"Brown, Tom B",2020.0,,,,arXiv preprint arXiv:2005.14165,Language Models are Few-shot Multilingual Learners,[PDF] Language Models are Few-shot Multilingual Learners,https://www.semanticscholar.org/paper/Language-Models-are-Few-shot-Multilingual-Learners-Winata-Madotto/42fc019b2668c9d9d984154d4c57f6c6d5a91619,"It is shown that, given a few English examples as context, pre-trained language models can predict not only English test samples but also non-English ones," "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,chowdhery2023palm,\cite{chowdhery2023palm},PaLM: Scaling Language Modeling with Pathways,http://arxiv.org/abs/2204.02311v5,"Large language models have been shown to achieve remarkable performance across a variety of natural language tasks using few-shot learning, which drastically reduces the number of task-specific training examples needed to adapt the model to a particular application. 
To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model PaLM. We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML system which enables highly efficient training across multiple TPU Pods. We demonstrate continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks. On a number of these tasks, PaLM 540B achieves breakthrough performance, outperforming the finetuned state-of-the-art on a suite of multi-step reasoning tasks, and outperforming average human performance on the recently released BIG-bench benchmark. A significant number of BIG-bench tasks showed discontinuous improvements from model scale, meaning that performance steeply increased as we scaled to our largest model. PaLM also has strong capabilities in multilingual tasks and source code generation, which we demonstrate on a wide array of benchmarks. We additionally provide a comprehensive analysis on bias and toxicity, and study the extent of training data memorization with respect to model scale. Finally, we discuss the ethical considerations related to large language models and discuss potential mitigation strategies.",True,True,"Chowdhery, Aakanksha and Narang, Sharan and Devlin, Jacob and Bosma, Maarten and Mishra, Gaurav and Roberts, Adam and Barham, Paul and Chung, Hyung Won and Sutton, Charles and Gehrmann, Sebastian and others",2023.0,,,,Journal of Machine Learning Research,PaLM: Scaling Language Modeling with Pathways,PaLM: Scaling Language Modeling with Pathways,http://arxiv.org/pdf/2204.02311v5,"Large language models have been shown to achieve remarkable performance across a variety of natural language tasks using few-shot learning, which drastically reduces the number of task-specific training examples needed to adapt the model to a particular application. To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model PaLM. We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML system which enables highly efficient training across multiple TPU Pods. We demonstrate continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks. On a number of these tasks, PaLM 540B achieves breakthrough performance, outperforming the finetuned state-of-the-art on a suite of multi-step reasoning tasks, and outperforming average human performance on the recently released BIG-bench benchmark. A significant number of BIG-bench tasks showed discontinuous improvements from model scale, meaning that performance steeply increased as we scaled to our largest model. PaLM also has strong capabilities in multilingual tasks and source code generation, which we demonstrate on a wide array of benchmarks. We additionally provide a comprehensive analysis on bias and toxicity, and study the extent of training data memorization with respect to model scale. Finally, we discuss the ethical considerations related to large language models and discuss potential mitigation strategies." 
"Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,narayanan2021efficient,\cite{narayanan2021efficient},Efficient large-scale language model training on gpu clusters using megatron-lm,,,True,False,"Narayanan, Deepak and Shoeybi, Mohammad and Casper, Jared and LeGresley, Patrick and Patwary, Mostofa and Korthikanti, Vijay and Vainbrand, Dmitri and Kashinkunti, Prethvi and Bernauer, Julie and Catanzaro, Bryan and others",2021.0,,,,,Efficient large-scale language model training on gpu clusters using megatron-lm,[PDF] Efficient Large-Scale Language Model Training on GPU Clusters ...,https://people.eecs.berkeley.edu/~matei/papers/2021/sc_megatron_lm.pdf,"This paper proposes a method using tensor, pipeline, and data parallelism, with a novel interleaved pipelining schedule, to improve throughput by 10+% for" "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,shoeybi2019megatron,\cite{shoeybi2019megatron},Megatron-lm: Training multi-billion parameter language models using model parallelism,,,True,False,"Shoeybi, Mohammad and Patwary, Mostofa and Puri, Raul and LeGresley, Patrick and Casper, Jared and Catanzaro, Bryan",2019.0,,,,arXiv preprint arXiv:1909.08053,Megatron-lm: Training multi-billion parameter language models using model parallelism,[PDF] Megatron-LM: Training Multi-Billion Parameter Language Models ...,https://parsa.epfl.ch/course-info/cs723/papers/Megatron.pdf,"Megatron-LM uses intra-layer model parallelism to train large transformer models, enabling training of models with billions of parameters, up to 8.3 billion on" "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,smith2022using,\cite{smith2022using},"Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model",http://arxiv.org/abs/2201.11990v3,"Pretrained general-purpose language models can achieve state-of-the-art accuracies in various natural language processing domains by adapting to downstream tasks via zero-shot, few-shot and fine-tuning techniques. Because of their success, the size of these models has increased rapidly, requiring high-performance hardware, software, and algorithmic techniques to enable training such large models. As the result of a joint effort between Microsoft and NVIDIA, we present details on the training of the largest monolithic transformer based language model, Megatron-Turing NLG 530B (MT-NLG), with 530 billion parameters. In this paper, we first focus on the infrastructure as well as the 3D parallelism methodology used to train this model using DeepSpeed and Megatron. Next, we detail the training process, the design of our training corpus, and our data curation techniques, which we believe is a key ingredient to the success of the model. Finally, we discuss various evaluation results, as well as other interesting observations and new properties exhibited by MT-NLG. We demonstrate that MT-NLG achieves superior zero-, one-, and few-shot learning accuracies on several NLP benchmarks and establishes new state-of-the-art results. 
We believe that our contributions will help further the development of large-scale training infrastructures, large-scale language models, and natural language generations.",True,True,"Smith, Shaden and Patwary, Mostofa and Norick, Brandon and LeGresley, Patrick and Rajbhandari, Samyam and Casper, Jared and Liu, Zhun and Prabhumoye, Shrimai and Zerveas, George and Korthikanti, Vijay and others",2022.0,,,,arXiv preprint arXiv:2201.11990,"Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model",[PDF] Using DeepSpeed and Megatron to Train Megatron-Turing NLG ...,https://arxiv.org/pdf/2201.11990, "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,xu2021gspmd,\cite{xu2021gspmd},GSPMD: General and Scalable Parallelization for ML Computation Graphs,http://arxiv.org/abs/2105.04663v2,"We present GSPMD, an automatic, compiler-based parallelization system for common machine learning computations. It allows users to write programs in the same way as for a single device, then give hints through a few annotations on how to distribute tensors, based on which GSPMD will parallelize the computation. Its representation of partitioning is simple yet general, allowing it to express different or mixed paradigms of parallelism on a wide variety of models. GSPMD infers the partitioning for every operator based on limited user annotations, making it convenient to scale existing single-device programs. It solves several technical challenges for production usage, allowing GSPMD to achieve 50% to 62% compute utilization on up to 2048 Cloud TPUv3 cores for models with up to one trillion parameters.",True,True,"Xu, Yuanzhong and Lee, HyoukJoong and Chen, Dehao and Hechtman, Blake and Huang, Yanping and Joshi, Rahul and Krikun, Maxim and Lepikhin, Dmitry and Ly, Andy and Maggioni, Marcello and others",2021.0,,,,arXiv preprint arXiv:2105.04663,GSPMD: General and Scalable Parallelization for ML Computation Graphs,GSPMD: General and Scalable Parallelization for ML Computation Graphs,http://arxiv.org/pdf/2105.04663v2,"We present GSPMD, an automatic, compiler-based parallelization system for common machine learning computations. It allows users to write programs in the same way as for a single device, then give hints through a few annotations on how to distribute tensors, based on which GSPMD will parallelize the computation. Its representation of partitioning is simple yet general, allowing it to express different or mixed paradigms of parallelism on a wide variety of models. GSPMD infers the partitioning for every operator based on limited user annotations, making it convenient to scale existing single-device programs. It solves several technical challenges for production usage, allowing GSPMD to achieve 50% to 62% compute utilization on up to 2048 Cloud TPUv3 cores for models with up to one trillion parameters." "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,sabne2020xla,\cite{sabne2020xla},Xla: Compiling machine learning for peak performance,,,True,False,"Sabne, Amit",2020.0,,,,Google Res,Xla: Compiling machine learning for peak performance,XLA : Compiling Machine Learning for Peak Performance,https://research.google/pubs/xla-compiling-machine-learning-for-peak-performance/,XLA (accelerated linear algebra) is a compiler-based linear algebra execution engine. It is the backend that powers machine learning frameworks.
"Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,zheng2022alpa,\cite{zheng2022alpa},Alpa: Automating inter-and $\{$Intra-Operator$\}$ parallelism for distributed deep learning,,,True,False,"Zheng, Lianmin and Li, Zhuohan and Zhang, Hao and Zhuang, Yonghao and Chen, Zhifeng and Huang, Yanping and Wang, Yida and Xu, Yuanzhong and Zhuo, Danyang and Xing, Eric P and others",2022.0,,,,,Alpa: Automating inter-and $\{$Intra-Operator$\}$ parallelism for distributed deep learning,(PDF) Alpa: Automating Inter- and Intra-Operator ...,https://www.researchgate.net/publication/358233150_Alpa_Automating_Inter-_and_Intra-Operator_Parallelism_for_Distributed_Deep_Learning,"Alpa automates model-parallel training of large deep learning (DL) models by generating execution plans that unify data, operator, and pipeline parallelism." "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,miao2022galvatron,\cite{miao2022galvatron},"Galvatron: Efficient Transformer Training over Multiple GPUs Using Automatic Parallelism",http://arxiv.org/abs/2211.13878v1,"Transformer models have achieved state-of-the-art performance on various domains of applications and gradually becomes the foundations of the advanced large deep learning (DL) models. However, how to train these models over multiple GPUs efficiently is still challenging due to a large number of parallelism choices. Existing DL systems either rely on manual efforts to make distributed training plans or apply parallelism combinations within a very limited search space. In this approach, we propose Galvatron, a new system framework that incorporates multiple popular parallelism dimensions and automatically finds the most efficient hybrid parallelism strategy. To better explore such a rarely huge search space, we 1) involve a decision tree to make decomposition and pruning based on some reasonable intuitions, and then 2) design a dynamic programming search algorithm to generate the optimal plan. Evaluations on four representative Transformer workloads show that Galvatron could perform automatically distributed training with different GPU memory budgets. Among all evluated scenarios, Galvatron always achieves superior system throughput compared to previous work with limited parallelism.",True,True,"Miao, Xupeng and Wang, Yujie and Jiang, Youhe and Shi, Chunan and Nie, Xiaonan and Zhang, Hailin and Cui, Bin",2022.0,,,,arXiv preprint arXiv:2211.13878,"Galvatron: Efficient Transformer Training over Multiple GPUs Using Automatic Parallelism",Galvatron: Efficient Transformer Training over Multiple GPUs ... - arXiv,https://arxiv.org/abs/2211.13878,"We propose Galvatron, a new system framework that incorporates multiple popular parallelism dimensions and automatically finds the most efficient hybrid" "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,li2021sequence,\cite{li2021sequence},Sequence Parallelism: Long Sequence Training from System Perspective,http://arxiv.org/abs/2105.13120v3,"Transformer achieves promising results on various tasks. However, self-attention suffers from quadratic memory requirements with respect to the sequence length. Existing work focuses on reducing time and space complexity from an algorithm perspective. In this work, we propose sequence parallelism, a memory-efficient parallelism method to help us break input sequence length limitation and train with longer sequences on GPUs efficiently. 
Our approach is compatible with most existing parallelisms (e.g. data parallelism, pipeline parallelism and tensor parallelism), which means our sequence parallelism makes 4D parallelism possible. More importantly, we no longer require a single device to hold the whole sequence. That is, with sparse attention, our sequence parallelism enables us to train transformer with infinite long sequence. Specifically, we split the input sequence into multiple chunks and feed each chunk into its corresponding device (i.e. GPU). To compute the attention output, we integrated ring-style communication with self-attention calculation and proposed Ring Self-Attention (RSA). Experiments show that sequence parallelism performs well when scaling with batch size and sequence length. Compared with tensor parallelism, our approach achieved $13.7\times$ and $3.0\times$ maximum batch size and sequence length respectively when scaling up to 64 NVIDIA P100 GPUs. With sparse attention, sequence can handle sequence with over 114K tokens, which is over $27\times$ longer than existing sparse attention works holding the whole sequence on a single device.",True,True,"Li, Shenggui and Xue, Fuzhao and Baranwal, Chaitanya and Li, Yongbin and You, Yang",2021.0,,,,arXiv preprint arXiv:2105.13120,Sequence Parallelism: Long Sequence Training from System Perspective,Sequence Parallelism: Long Sequence Training from System Perspective,http://arxiv.org/pdf/2105.13120v3,"Transformer achieves promising results on various tasks. However, self-attention suffers from quadratic memory requirements with respect to the sequence length. Existing work focuses on reducing time and space complexity from an algorithm perspective. In this work, we propose sequence parallelism, a memory-efficient parallelism method to help us break input sequence length limitation and train with longer sequences on GPUs efficiently. Our approach is compatible with most existing parallelisms (e.g. data parallelism, pipeline parallelism and tensor parallelism), which means our sequence parallelism makes 4D parallelism possible. More importantly, we no longer require a single device to hold the whole sequence. That is, with sparse attention, our sequence parallelism enables us to train transformer with infinite long sequence. Specifically, we split the input sequence into multiple chunks and feed each chunk into its corresponding device (i.e. GPU). To compute the attention output, we integrated ring-style communication with self-attention calculation and proposed Ring Self-Attention (RSA). Experiments show that sequence parallelism performs well when scaling with batch size and sequence length. Compared with tensor parallelism, our approach achieved $13.7\times$ and $3.0\times$ maximum batch size and sequence length respectively when scaling up to 64 NVIDIA P100 GPUs. With sparse attention, sequence can handle sequence with over 114K tokens, which is over $27\times$ longer than existing sparse attention works holding the whole sequence on a single device." "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,jacobs2023deepspeed,\cite{jacobs2023deepspeed},"DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models",http://arxiv.org/abs/2309.14509v2,"Computation in a typical Transformer-based large language model (LLM) can be characterized by batch size, hidden dimension, number of layers, and sequence length. 
Until now, system works for accelerating LLM training have focused on the first three dimensions: data parallelism for batch size, tensor parallelism for hidden size and pipeline parallelism for model depth or layers. These widely studied forms of parallelism are not targeted or optimized for long sequence Transformer models. Given practical application needs for long sequence LLM, renewed attentions are being drawn to sequence parallelism. However, existing works in sequence parallelism are constrained by memory-communication inefficiency, limiting their scalability to long sequence large models. In this work, we introduce DeepSpeed-Ulysses, a novel, portable and effective methodology for enabling highly efficient and scalable LLM training with extremely long sequence length. DeepSpeed-Ulysses at its core partitions input data along the sequence dimension and employs an efficient all-to-all collective communication for attention computation. Theoretical communication analysis shows that whereas other methods incur communication overhead as sequence length increases, DeepSpeed-Ulysses maintains constant communication volume when sequence length and compute devices are increased proportionally. Furthermore, experimental evaluations show that DeepSpeed-Ulysses trains 2.5x faster with 4x longer sequence length than the existing method SOTA baseline.",True,True,"Jacobs, Sam Ade and Tanaka, Masahiro and Zhang, Chengming and Zhang, Minjia and Song, Shuaiwen Leon and Rajbhandari, Samyam and He, Yuxiong",2023.0,,,,arXiv preprint arXiv:2309.14509,"DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models",DeepSpeed Ulysses: Optimizing Long Sequence Transformers,https://www.emergentmind.com/articles/2309.14509,DeepSpeed Ulysses enables Transformer training with up to 4x longer sequences and 2.5x higher throughput using efficient sequence "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,liu2023ring,\cite{liu2023ring},Ring Attention with Blockwise Transformers for Near-Infinite Context,http://arxiv.org/abs/2310.01889v4,"Transformers have emerged as the architecture of choice for many state-of-the-art AI models, showcasing exceptional performance across a wide range of AI applications. However, the memory demands imposed by Transformers limit their ability to handle long sequences, thereby posing challenges in utilizing videos, actions, and other long-form sequences and modalities in complex environments. We present a novel approach, Ring Attention with Blockwise Transformers (Ring Attention), which leverages blockwise computation of self-attention and feedforward to distribute long sequences across multiple devices while fully overlapping the communication of key-value blocks with the computation of blockwise attention. Our approach enables training and inference of sequences that are up to device count times longer than those achievable by prior memory-efficient Transformers, without resorting to approximations or incurring additional communication and computation overheads. 
Extensive experiments on language modeling and reinforcement learning tasks demonstrate the effectiveness of our approach in allowing millions of tokens context size and improving performance.",True,True,"Liu, Hao and Zaharia, Matei and Abbeel, Pieter",2023.0,,,,arXiv preprint arXiv:2310.01889,Ring Attention with Blockwise Transformers for Near-Infinite Context,RingAttention with Blockwise Transformers for Near-Infinite ...,https://openreview.net/forum?id=WsRHpHH4s0,by H Liu · Cited by 268 — Ring Attention not only achieves 64 times longer context size thanks to distributing the sequence across devices but also attains much higher MFU due to ... "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,moolchandani2023amped,\cite{moolchandani2023amped},Amped: An analytical model for performance in distributed training of transformers,,,True,False,"Moolchandani, Diksha and Kundu, Joyjit and Ruelens, Frederik and Vrancx, Peter and Evenblij, Timon and Perumkunnil, Manu",2023.0,,,,,Amped: An analytical model for performance in distributed training of transformers,[PDF] AMPeD: An Analytical Model for Performance in Distributed Training ...,https://diksha-moolchandani.github.io/files/papers/ispass_amped.pdf,"In this work, we proposed AMPeD, an analytical model for performance in distributed training of transformers. AMPeD provides the users with multiple tunable" "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,isaev2023calculon,\cite{isaev2023calculon},Calculon: a methodology and tool for high-level co-design of systems and large language models,,,True,False,"Isaev, Mikhail and McDonald, Nic and Dennison, Larry and Vuduc, Richard",2023.0,,,,,Calculon: a methodology and tool for high-level co-design of systems and large language models,Calculon - ACM Digital Library,https://dl.acm.org/doi/pdf/10.1145/3581784.3607102,"Calculon: a Methodology and Tool for High-Level Codesign of Systems and Large Language Models. SC '23, November 12–17, 2023, Denver, CO, USA." "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,das2018mixed,\cite{das2018mixed},"Mixed Precision Training of Convolutional Neural Networks using Integer Operations",http://arxiv.org/abs/1802.00930v2,"The state-of-the-art (SOTA) for mixed precision training is dominated by variants of low precision floating point operations, and in particular, FP16 accumulating into FP32 Micikevicius et al. (2017). On the other hand, while a lot of research has also happened in the domain of low and mixed-precision Integer training, these works either present results for non-SOTA networks (for instance only AlexNet for ImageNet-1K), or relatively small datasets (like CIFAR-10). In this work, we train state-of-the-art visual understanding neural networks on the ImageNet-1K dataset, with Integer operations on General Purpose (GP) hardware. In particular, we focus on Integer Fused-Multiply-and-Accumulate (FMA) operations which take two pairs of INT16 operands and accumulate results into an INT32 output. We propose a shared exponent representation of tensors and develop a Dynamic Fixed Point (DFP) scheme suitable for common neural network operations. The nuances of developing an efficient integer convolution kernel is examined, including methods to handle overflow of the INT32 accumulator.
We implement CNN training for ResNet-50, GoogLeNet-v1, VGG-16 and AlexNet; and these networks achieve or exceed SOTA accuracy within the same number of iterations as their FP32 counterparts without any change in hyper-parameters and with a 1.8X improvement in end-to-end training throughput. To the best of our knowledge these results represent the first INT16 training results on GP hardware for ImageNet-1K dataset using SOTA CNNs and achieve highest reported accuracy using half-precision",True,True,"Das, Dipankar and Mellempudi, Naveen and Mudigere, Dheevatsa and Kalamkar, Dhiraj and Avancha, Sasikanth and Banerjee, Kunal and Sridharan, Srinivas and Vaidyanathan, Karthik and Kaul, Bharat and Georganas, Evangelos and others",2018.0,,,,arXiv preprint arXiv:1802.00930,"Mixed Precision Training of Convolutional Neural Networks using Integer Operations",Mixed Precision Training of Convolutional Neural Networks ...,https://ar5iv.labs.arxiv.org/html/1802.00930,"In this work, we train state-of-the-art visual understanding neural networks on ImageNet-1K dataset, with Integer operations on General Purpose (GP) hardware." "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,zhu2020daydream,\cite{zhu2020daydream},"Daydream: Accurately Estimating the Efficacy of Optimizations for DNN Training",http://arxiv.org/abs/2006.03318v1,"Modern deep neural network (DNN) training jobs use complex and heterogeneous software/hardware stacks. The efficacy of software-level optimizations can vary significantly when used in different deployment configurations. It is onerous and error-prone for ML practitioners and system developers to implement each optimization separately, and determine which ones will improve performance in their own configurations. Unfortunately, existing profiling tools do not aim to answer predictive questions such as ""How will optimization X affect the performance of my model?"". We address this critical limitation, and proposes a new profiling tool, Daydream, to help programmers efficiently explore the efficacy of DNN optimizations. Daydream models DNN execution with a fine-grained dependency graph based on low-level traces collected by CUPTI, and predicts runtime by simulating execution based on the dependency graph. Daydream maps the low-level traces using DNN domain-specific knowledge, and introduces a set of graph-transformation primitives that can easily model a wide variety of optimizations. 
We show that Daydream is able to model most mainstream DNN optimization techniques, and accurately predict the efficacy of optimizations that will result in significant performance improvements.",True,True,"Zhu, Hongyu and Phanishayee, Amar and Pekhimenko, Gennady",2020.0,,,,,"Daydream: Accurately Estimating the Efficacy of Optimizations for DNN Training",Accurately Estimating the Efficacy of Optimizations for DNN ...,https://www.usenix.org/system/files/atc20-zhu-hongyu.pdf,"by H Zhu · 2020 · Cited by 51 — In our evaluation, we show that Daydream is able to distinguish effective DNN optimizations from those that will bring limited improvements by" "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,zhao2022apollo,\cite{zhao2022apollo},Apollo: Automatic partition-based operator fusion through layer by layer optimization,,,True,False,"Zhao, Jie and Gao, Xiong and Xia, Ruijie and Zhang, Zhaochuang and Chen, Deshi and Chen, Lei and Zhang, Renwei and Geng, Zhen and Cheng, Bin and Jin, Xuefeng",2022.0,,,,Proceedings of Machine Learning and Systems,Apollo: Automatic partition-based operator fusion through layer by layer optimization,Apollo: Automatic Partition-based Operator Fusion through ...,https://proceedings.mlsys.org/paper_files/paper/2022/file/e175e8a86d28d935be4f43719651f86d-Paper.pdf,"by J Zhao · 2022 · Cited by 57 — We present APOLLO, an Automatic Partition-based. Operator fusion framework through Layer by Layer. Optimization, to address the above challenges. It splits a" "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,jia2019taso,\cite{jia2019taso},TASO: optimizing deep learning computation with automatic generation of graph substitutions,,,True,False,"Jia, Zhihao and Padon, Oded and Thomas, James and Warszawski, Todd and Zaharia, Matei and Aiken, Alex",2019.0,,,,,TASO: optimizing deep learning computation with automatic generation of graph substitutions,[PDF] TASO: Optimizing Deep Learning Computation with Automatic ...,https://www.wisdom.weizmann.ac.il/~padon/taso-sosp19.pdf,"First, TASO only requires operator definitions and specifications, and automatically generates graph substitutions, reducing manual effort. Second, TASO employs" "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,rashidi2020astra,\cite{rashidi2020astra},Astra-sim: Enabling sw/hw co-design exploration for distributed dl training platforms,,,True,False,"Rashidi, Saeed and Sridharan, Srinivas and Srinivasan, Sudarshan and Krishna, Tushar",2020.0,,,,,Astra-sim: Enabling sw/hw co-design exploration for distributed dl training platforms,Enabling HW/SW Co-Design of Distributed Deep Learning ...,https://astra-sim.github.io/assets/tutorials/asplos-2022/1_asplos2022_introduction.pdf,"Rashidi et al.,“ASTRA-SIM: Enabling SW/HW. Co-Design Exploration for Distributed DL. Training Platforms”, ISPASS 2020. Page 60. Distributed Training Stack." "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,won2023astra,\cite{won2023astra},Astra-sim2. 0: Modeling hierarchical networks and disaggregated systems for large-model training at scale,,,True,False,"Won, William and Heo, Taekyung and Rashidi, Saeed and Sridharan, Srinivas and Srinivasan, Sudarshan and Krishna, Tushar",2023.0,,,,,Astra-sim2. 
0: Modeling hierarchical networks and disaggregated systems for large-model training at scale,[PDF] ASTRA-sim2.0: Modeling Hierarchical Networks and Disaggregated ...,https://arxiv.org/pdf/2303.14006, "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,lin2022building,\cite{lin2022building},"Building a Performance Model for Deep Learning Recommendation Model Training on GPUs",http://arxiv.org/abs/2201.07821v2,"We devise a performance model for GPU training of Deep Learning Recommendation Models (DLRM), whose GPU utilization is low compared to other well-optimized CV and NLP models. We show that both the device active time (the sum of kernel runtimes) but also the device idle time are important components of the overall device time. We therefore tackle them separately by (1) flexibly adopting heuristic-based and ML-based kernel performance models for operators that dominate the device active time, and (2) categorizing operator overheads into five types to determine quantitatively their contribution to the device active time. Combining these two parts, we propose a critical-path-based algorithm to predict the per-batch training time of DLRM by traversing its execution graph. We achieve less than 10% geometric mean average error (GMAE) in all kernel performance modeling, and 4.61% and 7.96% geomean errors for GPU active time and overall E2E per-batch training time prediction with overheads from individual workloads, respectively. A slight increase of 2.19% incurred in E2E prediction error with shared overheads across workloads suggests the feasibility of using shared overheads in large-scale prediction. We show that our general performance model not only achieves low prediction error on DLRM, which has highly customized configurations and is dominated by multiple factors but also yields comparable accuracy on other compute-bound ML models targeted by most previous methods.
Using this performance model and graph-level data and task dependency analysis, we show our system can provide more general model-system co-design than previous methods.",True,True,"Lin, Zhongyi and Feng, Louis and Ardestani, Ehsan K and Lee, Jaewon and Lundell, John and Kim, Changkyu and Kejariwal, Arun and Owens, John D",2022.0,,,,arXiv preprint arXiv:2201.07821,"Building a Performance Model for Deep Learning Recommendation Model Training on GPUs",Building a Performance Model for Deep Learning ...,https://arxiv.org/abs/2201.07821,"by Z Lin · 2022 · Cited by 25 — We devise a performance model for GPU training of Deep Learning Recommendation Models (DLRM), whose GPU utilization is low compared to other well-optimized CV" "Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training",2504.09307v1,hu2022dpro,\cite{hu2022dpro},dpro: A generic performance diagnosis and optimization toolkit for expediting distributed dnn training,,,True,False,"Hu, Hanpeng and Jiang, Chenyu and Zhong, Yuchen and Peng, Yanghua and Wu, Chuan and Zhu, Yibo and Lin, Haibin and Guo, Chuanxiong",2022.0,,,,Proceedings of Machine Learning and Systems,dpro: A generic performance diagnosis and optimization toolkit for expediting distributed dnn training,dPRO: A Generic Performance Diagnosis and Optimization Toolkit ...,https://proceedings.mlsys.org/paper_files/paper/2022/hash/b422680f3db0986ddd7f8f126baaf0fa-Abstract.html,"dPRO: A Generic Performance Diagnosis and Optimization Toolkit for Expediting Distributed DNN Training To date, there exists no software tool which diagnoses performance issues and helps expedite distributed DNN training, while the training can be run using different deep learning frameworks. This paper proposes dPRO, a toolkit that includes: (1) an efficient profiler that collects runtime traces of distributed DNN training across multiple frameworks, especially fine-grained communication traces, and constructs global data flow graphs including detailed communication operations for accurate replay; (2) an optimizer that effectively identifies performance bottlenecks and explores optimization strategies (from computation, communication, and memory aspects) for training acceleration." "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,covington2016deep,\cite{covington2016deep},Deep neural networks for youtube recommendations,,,True,False,"Covington, Paul and Adams, Jay and Sargin, Emre",2016.0,,,,,Deep neural networks for youtube recommendations,Deep Neural Networks for YouTube Recommendations,https://research.google.com/pubs/archive/45530.pdf,by P Covington · Cited by 4487 — In this paper we will focus on the immense impact deep learning has recently had on the YouTube video recommendations system.
"Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,pinsage,\cite{pinsage},Graph convolutional neural networks for web-scale recommender systems,,,True,False,"Ying, Rex and He, Ruining and Chen, Kaifeng and Eksombatchai, Pong and Hamilton, William L and Leskovec, Jure",2018.0,,,,,Graph convolutional neural networks for web-scale recommender systems,Graph Convolutional Neural Networks for Web-Scale Recommender ...,https://dl.acm.org/doi/10.1145/3219819.3219890,"We develop a data-efficient Graph Convolutional Network (GCN) algorithm, which combines efficient random walks and graph convolutions to generate embeddings of" "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,yang2022hrcf,\cite{yang2022hrcf},"HRCF: Enhancing Collaborative Filtering via Hyperbolic Geometric Regularization",http://arxiv.org/abs/2204.08176v2,"In large-scale recommender systems, the user-item networks are generally scale-free or expand exponentially. The latent features (also known as embeddings) used to describe the user and item are determined by how well the embedding space fits the data distribution. Hyperbolic space offers a spacious room to learn embeddings with its negative curvature and metric properties, which can well fit data with tree-like structures. Recently, several hyperbolic approaches have been proposed to learn high-quality representations for the users and items. However, most of them concentrate on developing the hyperbolic similitude by designing appropriate projection operations, whereas many advantageous and exciting geometric properties of hyperbolic space have not been explicitly explored. For example, one of the most notable properties of hyperbolic space is that its capacity space increases exponentially with the radius, which indicates the area far away from the hyperbolic origin is much more embeddable. Regarding the geometric properties of hyperbolic space, we bring up a Hyperbolic Regularization powered Collaborative Filtering(HRCF) and design a geometric-aware hyperbolic regularizer. Specifically, the proposal boosts optimization procedure via the root alignment and origin-aware penalty, which is simple yet impressively effective. Through theoretical analysis, we further show that our proposal is able to tackle the over-smoothing problem caused by hyperbolic aggregation and also brings the models a better discriminative ability. We conduct extensive empirical analysis, comparing our proposal against a large set of baselines on several public benchmarks. The empirical results show that our approach achieves highly competitive performance and surpasses both the leading Euclidean and hyperbolic baselines by considerable margins.",True,True,"Yang, Menglin and Zhou, Min and Liu, Jiahong and Lian, Defu and King, Irwin",2022.0,,,,,"HRCF: Enhancing Collaborative Filtering via Hyperbolic Geometric Regularization","marlin-codes/HRCF: PyTorch Implementation for ""HRCF",https://github.com/marlin-codes/HRCF,"WWW'22 HRCF: Enhancing Collaborative Filtering via Hyperbolic Geometric Regularization [PDF].
Authors: Menglin Yang, Min Zhou, Jiahong Liu, Defu Lian, Irwin" "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,lin2024effective,\cite{lin2024effective},Effective Job-market Mobility Prediction with Attentive Heterogeneous Knowledge Learning and Synergy,,,True,False,"Lin, Sida and Zhang, Zhouyi and Chen, Yankai and Ma, Chenhao and Fang, Yixiang and Dai, Shan and Lu, Guangli",2024.0,,,,,Effective Job-market Mobility Prediction with Attentive Heterogeneous Knowledge Learning and Synergy,Effective Job-market Mobility Prediction with Attentive ... - Consensus,https://consensus.app/papers/effective-jobmarket-mobility-prediction-with-attentive-lin-fang/37d6ec851294570fb190488299427191/,Key takeaway: 'AHKLS model effectively predicts job-market mobility by utilizing heterogeneous relational knowledge and time-aware "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,zhang2022knowledge,\cite{zhang2022knowledge},Knowledge-aware neural networks with personalized feature referencing for cold-start recommendation,,,True,False,"Zhang, Xinni and Chen, Yankai and Gao, Cuiyun and Liao, Qing and Zhao, Shenglin and King, Irwin",2022.0,,,,arXiv preprint arXiv:2209.13973,Knowledge-aware neural networks with personalized feature referencing for cold-start recommendation,Knowledge-aware Neural Networks with Personalized Feature ...,https://arxiv.org/abs/2209.13973,Abstract page for arXiv paper 2209.13973: Knowledge-aware Neural Networks with Personalized Feature Referencing for Cold-start Recommendation. "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,luo2025rank,\cite{luo2025rank},Rank Gap Sensitive Deep AUC maximization for CTR prediction,,,True,False,"Luo, Fangyuan and Chen, Yankai and Wu, Jun and Li, Yidong",2025.0,,,,Pattern Recognition,Rank Gap Sensitive Deep AUC maximization for CTR prediction,Rank Gap Sensitive Deep AUC maximization for CTR prediction - ADS,https://ui.adsabs.harvard.edu/abs/2025PatRe.16411496L/abstract,"To this end, we propose Rank Gap Sensitive Deep AUC maximization method for accurate CTR prediction, namely RgsAUC." "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,koren2009matrix,\cite{koren2009matrix},Content-boosted Matrix Factorization Techniques for Recommender Systems,http://arxiv.org/abs/1210.5631v2,"Many businesses are using recommender systems for marketing outreach. Recommendation algorithms can be either based on content or driven by collaborative filtering. We study different ways to incorporate content information directly into the matrix factorization approach of collaborative filtering. 
These content-boosted matrix factorization algorithms not only improve recommendation accuracy, but also provide useful insights about the contents, as well as make recommendations more easily interpretable.",True,True,"Koren, Yehuda and Bell, Robert and Volinsky, Chris",2009.0,,,,Computer,Content-boosted Matrix Factorization Techniques for Recommender Systems,Content-boosted Matrix Factorization Techniques for Recommender ...,https://arxiv.org/abs/1210.5631,"Content-boosted Matrix Factorization Techniques for Recommender Systems, by Jennifer Nguyen and 1 other authors. arXiv:1210.5631 [stat.ML] (arXiv:1210.5631v2 for this version)." "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,rendle2012bpr,\cite{rendle2012bpr},BPR: Bayesian Personalized Ranking from Implicit Feedback,http://arxiv.org/abs/1205.2618v1,"Item recommendation is the task of predicting a personalized ranking on a set of items (e.g. websites, movies, products). In this paper, we investigate the most common scenario with implicit feedback (e.g. clicks, purchases). There are many methods for item recommendation from implicit feedback like matrix factorization (MF) or adaptive knearest-neighbor (kNN). Even though these methods are designed for the item prediction task of personalized ranking, none of them is directly optimized for ranking. In this paper we present a generic optimization criterion BPR-Opt for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem. We also provide a generic learning algorithm for optimizing models with respect to BPR-Opt. The learning method is based on stochastic gradient descent with bootstrap sampling. We show how to apply our method to two state-of-the-art recommender models: matrix factorization and adaptive kNN. Our experiments indicate that for the task of personalized ranking our optimization method outperforms the standard learning techniques for MF and kNN. The results show the importance of optimizing models for the right criterion.",True,True,"Rendle, Steffen and Freudenthaler, Christoph and Gantner, Zeno and Schmidt-Thieme, Lars",2012.0,,,,arXiv preprint arXiv:1205.2618,BPR: Bayesian Personalized Ranking from Implicit Feedback,BPR: Bayesian Personalized Ranking from Implicit Feedback,http://arxiv.org/pdf/1205.2618v1,"Item recommendation is the task of predicting a personalized ranking on a set of items (e.g. websites, movies, products). In this paper, we investigate the most common scenario with implicit feedback (e.g. clicks, purchases). There are many methods for item recommendation from implicit feedback like matrix factorization (MF) or adaptive knearest-neighbor (kNN).
Even though these methods are designed for the item prediction task of personalized ranking, none of them is directly optimized for ranking. In this paper we present a generic optimization criterion BPR-Opt for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem. We also provide a generic learning algorithm for optimizing models with respect to BPR-Opt. The learning method is based on stochastic gradient descent with bootstrap sampling. We show how to apply our method to two state-of-the-art recommender models: matrix factorization and adaptive kNN. Our experiments indicate that for the task of personalized ranking our optimization method outperforms the standard learning techniques for MF and kNN. The results show the importance of optimizing models for the right criterion." "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,neurcf,\cite{neurcf},Neural Collaborative Filtering,http://arxiv.org/abs/1708.05031v2,"In recent years, deep neural networks have yielded immense success on speech recognition, computer vision and natural language processing. However, the exploration of deep neural networks on recommender systems has received relatively less scrutiny. In this work, we strive to develop techniques based on neural networks to tackle the key problem in recommendation -- collaborative filtering -- on the basis of implicit feedback. Although some recent work has employed deep learning for recommendation, they primarily used it to model auxiliary information, such as textual descriptions of items and acoustic features of musics. When it comes to model the key factor in collaborative filtering -- the interaction between user and item features, they still resorted to matrix factorization and applied an inner product on the latent features of users and items. By replacing the inner product with a neural architecture that can learn an arbitrary function from data, we present a general framework named NCF, short for Neural network-based Collaborative Filtering. NCF is generic and can express and generalize matrix factorization under its framework. To supercharge NCF modelling with non-linearities, we propose to leverage a multi-layer perceptron to learn the user-item interaction function. Extensive experiments on two real-world datasets show significant improvements of our proposed NCF framework over the state-of-the-art methods. Empirical evidence shows that using deeper layers of neural networks offers better recommendation performance.",True,True,"He, Xiangnan and Liao, Lizi and Zhang, Hanwang and Nie, Liqiang and Hu, Xia and Chua, Tat-Seng",2017.0,,,,,Neural Collaborative Filtering,Neural Collaborative Filtering,http://arxiv.org/pdf/1708.05031v2,"In recent years, deep neural networks have yielded immense success on speech recognition, computer vision and natural language processing. However, the exploration of deep neural networks on recommender systems has received relatively less scrutiny. In this work, we strive to develop techniques based on neural networks to tackle the key problem in recommendation -- collaborative filtering -- on the basis of implicit feedback. Although some recent work has employed deep learning for recommendation, they primarily used it to model auxiliary information, such as textual descriptions of items and acoustic features of musics. 
When it comes to model the key factor in collaborative filtering -- the interaction between user and item features, they still resorted to matrix factorization and applied an inner product on the latent features of users and items. By replacing the inner product with a neural architecture that can learn an arbitrary function from data, we present a general framework named NCF, short for Neural network-based Collaborative Filtering. NCF is generic and can express and generalize matrix factorization under its framework. To supercharge NCF modelling with non-linearities, we propose to leverage a multi-layer perceptron to learn the user-item interaction function. Extensive experiments on two real-world datasets show significant improvements of our proposed NCF framework over the state-of-the-art methods. Empirical evidence shows that using deeper layers of neural networks offers better recommendation performance." "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,chen2017attentive,\cite{chen2017attentive},Attentive collaborative filtering: Multimedia recommendation with item-and component-level attention,,,True,False,"Chen, Jingyuan and Zhang, Hanwang and He, Xiangnan and Nie, Liqiang and Liu, Wei and Chua, Tat-Seng",2017.0,,,,,Attentive collaborative filtering: Multimedia recommendation with item-and component-level attention,ChenJingyuan91/ACF: Attentive collaborative filtering - GitHub,https://github.com/ChenJingyuan91/ACF,Attentive collaborative filtering: Multimedia recommendation with item-and component-level attention. 71 stars 31 forks "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,he2018nais,\cite{he2018nais},NAIS: Neural Attentive Item Similarity Model for Recommendation,http://arxiv.org/abs/1809.07053v1,"Item-to-item collaborative filtering (aka. item-based CF) has been long used for building recommender systems in industrial settings, owing to its interpretability and efficiency in real-time personalization. It builds a user's profile as her historically interacted items, recommending new items that are similar to the user's profile. As such, the key to an item-based CF method is in the estimation of item similarities. Early approaches use statistical measures such as cosine similarity and Pearson coefficient to estimate item similarities, which are less accurate since they lack tailored optimization for the recommendation task. In recent years, several works attempt to learn item similarities from data, by expressing the similarity as an underlying model and estimating model parameters by optimizing a recommendation-aware objective function. While extensive efforts have been made to use shallow linear models for learning item similarities, there has been relatively less work exploring nonlinear neural network models for item-based CF. In this work, we propose a neural network model named Neural Attentive Item Similarity model (NAIS) for item-based CF. The key to our design of NAIS is an attention network, which is capable of distinguishing which historical items in a user profile are more important for a prediction. Compared to the state-of-the-art item-based CF method Factored Item Similarity Model (FISM), our NAIS has stronger representation power with only a few additional parameters brought by the attention network. Extensive experiments on two public benchmarks demonstrate the effectiveness of NAIS. 
This work is the first attempt that designs neural network models for item-based CF, opening up new research possibilities for future developments of neural recommender systems.",True,True,"He, Xiangnan and He, Zhankui and Song, Jingkuan and Liu, Zhenguang and Jiang, Yu-Gang and Chua, Tat-Seng",2018.0,,,,TKDE,NAIS: Neural Attentive Item Similarity Model for Recommendation,NAIS: Neural Attentive Item Similarity Model for Recommendation,http://arxiv.org/pdf/1809.07053v1,"Item-to-item collaborative filtering (aka. item-based CF) has been long used for building recommender systems in industrial settings, owing to its interpretability and efficiency in real-time personalization. It builds a user's profile as her historically interacted items, recommending new items that are similar to the user's profile. As such, the key to an item-based CF method is in the estimation of item similarities. Early approaches use statistical measures such as cosine similarity and Pearson coefficient to estimate item similarities, which are less accurate since they lack tailored optimization for the recommendation task. In recent years, several works attempt to learn item similarities from data, by expressing the similarity as an underlying model and estimating model parameters by optimizing a recommendation-aware objective function. While extensive efforts have been made to use shallow linear models for learning item similarities, there has been relatively less work exploring nonlinear neural network models for item-based CF. In this work, we propose a neural network model named Neural Attentive Item Similarity model (NAIS) for item-based CF. The key to our design of NAIS is an attention network, which is capable of distinguishing which historical items in a user profile are more important for a prediction. Compared to the state-of-the-art item-based CF method Factored Item Similarity Model (FISM), our NAIS has stronger representation power with only a few additional parameters brought by the attention network. Extensive experiments on two public benchmarks demonstrate the effectiveness of NAIS. This work is the first attempt that designs neural network models for item-based CF, opening up new research possibilities for future developments of neural recommender systems." "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,wu2023survey,\cite{wu2023survey},A survey on graph embedding techniques for biomedical data: Methods and applications,,,True,False,"Wu, Yaozu and Chen, Yankai and Yin, Zhishuai and Ding, Weiping and King, Irwin",2023.0,,,,Information Fusion,A survey on graph embedding techniques for biomedical data: Methods and applications,A survey on graph embedding techniques for biomedical ...,https://www.sciencedirect.com/science/article/pii/S1566253523002257,"by Y Wu · 2023 · Cited by 27 — In this article, we focus on the application of graph data in the biomedical domain and mainly introduce recent developments of graph embedding techniques." 
"Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,zhang2024geometric,\cite{zhang2024geometric},Geometric view of soft decorrelation in self-supervised learning,,,True,False,"Zhang, Yifei and Zhu, Hao and Song, Zixing and Chen, Yankai and Fu, Xinyu and Meng, Ziqiao and Koniusz, Piotr and King, Irwin",2024.0,,,,,Geometric view of soft decorrelation in self-supervised learning,Geometric View of Soft Decorrelation in Self-Supervised Learning,https://dl.acm.org/doi/pdf/10.1145/3637528.3671914,We introduce a novel geometric perspective to analyze the limitations of soft decorrelation in preventing dimensional collapse in self- "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,zhang2023contrastive,\cite{zhang2023contrastive},Contrastive cross-scale graph knowledge synergy,,,True,False,"Zhang, Yifei and Chen, Yankai and Song, Zixing and King, Irwin",2023.0,,,,,Contrastive cross-scale graph knowledge synergy,Contrastive Cross-scale Graph Knowledge Synergy,https://dl.acm.org/doi/10.1145/3580305.3599286,"We propose Cross-Scale Contrastive Graph Knowledge Synergy (CGKS), a generic feature learning framework, to advance graph contrastive learning." "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,graphsage,\cite{graphsage},Inductive Representation Learning in Large Attributed Graphs,http://arxiv.org/abs/1710.09471v2,"Graphs (networks) are ubiquitous and allow us to model entities (nodes) and the dependencies (edges) between them. Learning a useful feature representation from graph data lies at the heart and success of many machine learning tasks such as classification, anomaly detection, link prediction, among many others. Many existing techniques use random walks as a basis for learning features or estimating the parameters of a graph model for a downstream prediction task. Examples include recent node embedding methods such as DeepWalk, node2vec, as well as graph-based deep learning algorithms. However, the simple random walk used by these methods is fundamentally tied to the identity of the node. This has three main disadvantages. First, these approaches are inherently transductive and do not generalize to unseen nodes and other graphs. Second, they are not space-efficient as a feature vector is learned for each node which is impractical for large graphs. Third, most of these approaches lack support for attributed graphs. To make these methods more generally applicable, we propose a framework for inductive network representation learning based on the notion of attributed random walk that is not tied to node identity and is instead based on learning a function $\Phi : \mathrm{\rm \bf x} \rightarrow w$ that maps a node attribute vector $\mathrm{\rm \bf x}$ to a type $w$. This framework serves as a basis for generalizing existing methods such as DeepWalk, node2vec, and many other previous methods that leverage traditional random walks.",True,True,"Hamilton, William L and Ying, Rex and Leskovec, Jure",2017.0,,,,,Inductive Representation Learning in Large Attributed Graphs,Inductive Representation Learning in Large Attributed Graphs,http://arxiv.org/pdf/1710.09471v2,"Graphs (networks) are ubiquitous and allow us to model entities (nodes) and the dependencies (edges) between them. 
Learning a useful feature representation from graph data lies at the heart and success of many machine learning tasks such as classification, anomaly detection, link prediction, among many others. Many existing techniques use random walks as a basis for learning features or estimating the parameters of a graph model for a downstream prediction task. Examples include recent node embedding methods such as DeepWalk, node2vec, as well as graph-based deep learning algorithms. However, the simple random walk used by these methods is fundamentally tied to the identity of the node. This has three main disadvantages. First, these approaches are inherently transductive and do not generalize to unseen nodes and other graphs. Second, they are not space-efficient as a feature vector is learned for each node which is impractical for large graphs. Third, most of these approaches lack support for attributed graphs. To make these methods more generally applicable, we propose a framework for inductive network representation learning based on the notion of attributed random walk that is not tied to node identity and is instead based on learning a function $\Phi : \mathrm{\rm \bf x} \rightarrow w$ that maps a node attribute vector $\mathrm{\rm \bf x}$ to a type $w$. This framework serves as a basis for generalizing existing methods such as DeepWalk, node2vec, and many other previous methods that leverage traditional random walks." "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,kipf2016semi,\cite{kipf2016semi},Semi-Supervised Classification with Graph Convolutional Networks,http://arxiv.org/abs/1609.02907v4,"We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.",True,True,"Kipf, Thomas N and Welling, Max",2017.0,,,,,Semi-Supervised Classification with Graph Convolutional Networks,Semi-Supervised Classification with Graph Convolutional Networks,https://openreview.net/forum?id=SJU4ayYgl,"Semi-Supervised Classification with Graph Convolutional Networks | OpenReview. Abstract: We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. TL;DR: Semi-supervised classification with a CNN model for graphs."
"Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,wang2022aep,\cite{wang2022aep},AEP: Aligning knowledge graphs via embedding propagation,,,True,False,"Wang, Chenxu and Wan, Yue and Huang, Zhenhao and Meng, Panpan and Wang, Pinghui",2022.0,,,,Neurocomputing,AEP: Aligning knowledge graphs via embedding propagation,AEP: Aligning knowledge graphs via embedding propagation - Details,http://ir.xjtu.edu.cn/item/ir/491648,"Knowledge graph alignment aims to identify entity pairs having the same meaning between different knowledge graphs, which is essential to the automated" "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,chen2025semi,\cite{chen2025semi},"Semi-supervised Node Importance Estimation with Informative Distribution Modeling for Uncertainty Regularization",http://arxiv.org/abs/2503.20697v2,"Node importance estimation, a classical problem in network analysis, underpins various web applications. Previous methods either exploit intrinsic topological characteristics, e.g., graph centrality, or leverage additional information, e.g., data heterogeneity, for node feature enhancement. However, these methods follow the supervised learning setting, overlooking the fact that ground-truth node-importance data are usually partially labeled in practice. In this work, we propose the first semi-supervised node importance estimation framework, i.e., EASING, to improve learning quality for unlabeled data in heterogeneous graphs. Different from previous approaches, EASING explicitly captures uncertainty to reflect the confidence of model predictions. To jointly estimate the importance values and uncertainties, EASING incorporates DJE, a deep encoder-decoder neural architecture. DJE introduces distribution modeling for graph nodes, where the distribution representations derive both importance and uncertainty estimates. Additionally, DJE facilitates effective pseudo-label generation for the unlabeled data to enrich the training samples. Based on labeled and pseudo-labeled data, EASING develops effective semi-supervised heteroscedastic learning with varying node uncertainty regularization. Extensive experiments on three real-world datasets highlight the superior performance of EASING compared to competing methods. Codes are available via https://github.com/yankai-chen/EASING.",True,True,"Chen, Yankai and Wang, Taotao and Fang, Yixiang and Xiao, Yunyu",2025.0,,,,,"Semi-supervised Node Importance Estimation with Informative Distribution Modeling for Uncertainty Regularization",Semi-supervised Node Importance Estimation with Informative...,https://openreview.net/forum?id=iHaHRqQmN4&referrer=%5Bthe%20profile%20of%20Yankai%20Chen%5D(%2Fprofile%3Fid%3D~Yankai_Chen2),"This paper addresses semi-supervised node importance estimation in heterogeneous graphs, focusing on learning with limited labeled data and incorporating" "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,berg2017graph,\cite{berg2017graph},Graph Convolutional Matrix Completion,http://arxiv.org/abs/1706.02263v2,"We consider matrix completion for recommender systems from the point of view of link prediction on graphs. Interaction data such as movie ratings can be represented by a bipartite user-item graph with labeled edges denoting observed ratings. 
Building on recent progress in deep learning on graph-structured data, we propose a graph auto-encoder framework based on differentiable message passing on the bipartite interaction graph. Our model shows competitive performance on standard collaborative filtering benchmarks. In settings where complimentary feature information or structured data such as a social network is available, our framework outperforms recent state-of-the-art methods.",True,True,"Berg, Rianne van den and Kipf, Thomas N and Welling, Max",2017.0,,,,arXiv preprint arXiv:1706.02263,Graph Convolutional Matrix Completion,Graph Convolutional Matrix Completion,http://arxiv.org/pdf/1706.02263v2,"We consider matrix completion for recommender systems from the point of view of link prediction on graphs. Interaction data such as movie ratings can be represented by a bipartite user-item graph with labeled edges denoting observed ratings. Building on recent progress in deep learning on graph-structured data, we propose a graph auto-encoder framework based on differentiable message passing on the bipartite interaction graph. Our model shows competitive performance on standard collaborative filtering benchmarks. In settings where complimentary feature information or structured data such as a social network is available, our framework outperforms recent state-of-the-art methods." "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,ngcf,\cite{ngcf},Neural Graph Collaborative Filtering,http://arxiv.org/abs/1905.08108v2,"Learning vector representations (aka. embeddings) of users and items lies at the core of modern recommender systems. Ranging from early matrix factorization to recently emerged deep learning based methods, existing efforts typically obtain a user's (or an item's) embedding by mapping from pre-existing features that describe the user (or the item), such as ID and attributes. We argue that an inherent drawback of such methods is that, the collaborative signal, which is latent in user-item interactions, is not encoded in the embedding process. As such, the resultant embeddings may not be sufficient to capture the collaborative filtering effect. In this work, we propose to integrate the user-item interactions -- more specifically the bipartite graph structure -- into the embedding process. We develop a new recommendation framework Neural Graph Collaborative Filtering (NGCF), which exploits the user-item graph structure by propagating embeddings on it. This leads to the expressive modeling of high-order connectivity in user-item graph, effectively injecting the collaborative signal into the embedding process in an explicit manner. We conduct extensive experiments on three public benchmarks, demonstrating significant improvements over several state-of-the-art models like HOP-Rec and Collaborative Memory Network. Further analysis verifies the importance of embedding propagation for learning better user and item representations, justifying the rationality and effectiveness of NGCF. Codes are available at https://github.com/xiangwang1223/neural_graph_collaborative_filtering.",True,True,"Wang, Xiang and He, Xiangnan and Wang, Meng and Feng, Fuli and Chua, Tat-Seng",2019.0,,,,,Neural Graph Collaborative Filtering,Neural Graph Collaborative Filtering,http://arxiv.org/pdf/1905.08108v2,"Learning vector representations (aka. embeddings) of users and items lies at the core of modern recommender systems. 
Ranging from early matrix factorization to recently emerged deep learning based methods, existing efforts typically obtain a user's (or an item's) embedding by mapping from pre-existing features that describe the user (or the item), such as ID and attributes. We argue that an inherent drawback of such methods is that, the collaborative signal, which is latent in user-item interactions, is not encoded in the embedding process. As such, the resultant embeddings may not be sufficient to capture the collaborative filtering effect. In this work, we propose to integrate the user-item interactions -- more specifically the bipartite graph structure -- into the embedding process. We develop a new recommendation framework Neural Graph Collaborative Filtering (NGCF), which exploits the user-item graph structure by propagating embeddings on it. This leads to the expressive modeling of high-order connectivity in user-item graph, effectively injecting the collaborative signal into the embedding process in an explicit manner. We conduct extensive experiments on three public benchmarks, demonstrating significant improvements over several state-of-the-art models like HOP-Rec and Collaborative Memory Network. Further analysis verifies the importance of embedding propagation for learning better user and item representations, justifying the rationality and effectiveness of NGCF. Codes are available at https://github.com/xiangwang1223/neural_graph_collaborative_filtering." "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,dgcf,\cite{dgcf},Disentangled Graph Collaborative Filtering,http://arxiv.org/abs/2007.01764v1,"Learning informative representations of users and items from the interaction data is of crucial importance to collaborative filtering (CF). Present embedding functions exploit user-item relationships to enrich the representations, evolving from a single user-item instance to the holistic interaction graph. Nevertheless, they largely model the relationships in a uniform manner, while neglecting the diversity of user intents on adopting the items, which could be to pass time, for interest, or shopping for others like families. Such uniform approach to model user interests easily results in suboptimal representations, failing to model diverse relationships and disentangle user intents in representations. In this work, we pay special attention to user-item relationships at the finer granularity of user intents. We hence devise a new model, Disentangled Graph Collaborative Filtering (DGCF), to disentangle these factors and yield disentangled representations. Specifically, by modeling a distribution over intents for each user-item interaction, we iteratively refine the intent-aware interaction graphs and representations. Meanwhile, we encourage independence of different intents. This leads to disentangled representations, effectively distilling information pertinent to each intent. We conduct extensive experiments on three benchmark datasets, and DGCF achieves significant improvements over several state-of-the-art models like NGCF, DisenGCN, and MacridVAE. Further analyses offer insights into the advantages of DGCF on the disentanglement of user intents and interpretability of representations. 
Our codes are available in https://github.com/xiangwang1223/disentangled_graph_collaborative_filtering.",True,True,"Wang, Xiang and Jin, Hongye and Zhang, An and He, Xiangnan and Xu, Tong and Chua, Tat-Seng",2020.0,,,,,Disentangled Graph Collaborative Filtering,Disentangled Graph Collaborative Filtering,http://arxiv.org/pdf/2007.01764v1,"Learning informative representations of users and items from the interaction data is of crucial importance to collaborative filtering (CF). Present embedding functions exploit user-item relationships to enrich the representations, evolving from a single user-item instance to the holistic interaction graph. Nevertheless, they largely model the relationships in a uniform manner, while neglecting the diversity of user intents on adopting the items, which could be to pass time, for interest, or shopping for others like families. Such uniform approach to model user interests easily results in suboptimal representations, failing to model diverse relationships and disentangle user intents in representations. In this work, we pay special attention to user-item relationships at the finer granularity of user intents. We hence devise a new model, Disentangled Graph Collaborative Filtering (DGCF), to disentangle these factors and yield disentangled representations. Specifically, by modeling a distribution over intents for each user-item interaction, we iteratively refine the intent-aware interaction graphs and representations. Meanwhile, we encourage independence of different intents. This leads to disentangled representations, effectively distilling information pertinent to each intent. We conduct extensive experiments on three benchmark datasets, and DGCF achieves significant improvements over several state-of-the-art models like NGCF, DisenGCN, and MacridVAE. Further analyses offer insights into the advantages of DGCF on the disentanglement of user intents and interpretability of representations. Our codes are available in https://github.com/xiangwang1223/disentangled_graph_collaborative_filtering." "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,lightgcn,\cite{lightgcn},"LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation",http://arxiv.org/abs/2002.02126v4,"Graph Convolution Network (GCN) has become new state-of-the-art for collaborative filtering. Nevertheless, the reasons of its effectiveness for recommendation are not well understood. Existing work that adapts GCN to recommendation lacks thorough ablation analyses on GCN, which is originally designed for graph classification tasks and equipped with many neural network operations. However, we empirically find that the two most common designs in GCNs -- feature transformation and nonlinear activation -- contribute little to the performance of collaborative filtering. Even worse, including them adds to the difficulty of training and degrades recommendation performance. In this work, we aim to simplify the design of GCN to make it more concise and appropriate for recommendation. We propose a new model named LightGCN, including only the most essential component in GCN -- neighborhood aggregation -- for collaborative filtering. Specifically, LightGCN learns user and item embeddings by linearly propagating them on the user-item interaction graph, and uses the weighted sum of the embeddings learned at all layers as the final embedding. 
Such simple, linear, and neat model is much easier to implement and train, exhibiting substantial improvements (about 16.0\% relative improvement on average) over Neural Graph Collaborative Filtering (NGCF) -- a state-of-the-art GCN-based recommender model -- under exactly the same experimental setting. Further analyses are provided towards the rationality of the simple LightGCN from both analytical and empirical perspectives.",True,True,"He, Xiangnan and Deng, Kuan and Wang, Xiang and Li, Yan and Zhang, Yongdong and Wang, Meng",2020.0,,,,,"LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation",LightGCN: Simplifying and Powering Graph Convolution ...,https://dl.acm.org/doi/10.1145/3397271.3401063,"We propose a new model named LightGCN, including only the most essential component in GCN -- neighborhood aggregation -- for collaborative filtering." "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,lsh,\cite{lsh},Similarity search in high dimensions via hashing,,,True,False,"Gionis, Aristides and Indyk, Piotr and Motwani, Rajeev and others",1999.0,,,,,Similarity search in high dimensions via hashing,[PDF] Similarity Search in High Dimensions via Hashing - cs.Princeton,https://www.cs.princeton.edu/courses/archive/spring13/cos598C/Gionis.pdf,"This paper proposes a novel scheme for approximate similarity search in high dimensions using hashing, where objects are represented as points in a high-" "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,hashnet,\cite{hashnet},HashNet: Deep Learning to Hash by Continuation,http://arxiv.org/abs/1702.00758v4,"Learning to hash has been widely applied to approximate nearest neighbor search for large-scale multimedia retrieval, due to its computation efficiency and retrieval quality. Deep learning to hash, which improves retrieval quality by end-to-end representation learning and hash encoding, has received increasing attention recently. Subject to the ill-posed gradient difficulty in the optimization with sign activations, existing deep learning to hash methods need to first learn continuous representations and then generate binary hash codes in a separated binarization step, which suffer from substantial loss of retrieval quality. This work presents HashNet, a novel deep architecture for deep learning to hash by continuation method with convergence guarantees, which learns exactly binary hash codes from imbalanced similarity data. The key idea is to attack the ill-posed gradient problem in optimizing deep networks with non-smooth binary activations by continuation method, in which we begin from learning an easier network with smoothed activation function and let it evolve during the training, until it eventually goes back to being the original, difficult to optimize, deep network with the sign activation function. 
Comprehensive empirical evidence shows that HashNet can generate exactly binary hash codes and yield state-of-the-art multimedia retrieval performance on standard benchmarks.",True,True,"Cao, Zhangjie and Long, Mingsheng and Wang, Jianmin and Yu, Philip S",2017.0,,,,,HashNet: Deep Learning to Hash by Continuation,HashNet: Deep Learning to Hash by Continuation,http://arxiv.org/pdf/1702.00758v4,"Learning to hash has been widely applied to approximate nearest neighbor search for large-scale multimedia retrieval, due to its computation efficiency and retrieval quality. Deep learning to hash, which improves retrieval quality by end-to-end representation learning and hash encoding, has received increasing attention recently. Subject to the ill-posed gradient difficulty in the optimization with sign activations, existing deep learning to hash methods need to first learn continuous representations and then generate binary hash codes in a separated binarization step, which suffer from substantial loss of retrieval quality. This work presents HashNet, a novel deep architecture for deep learning to hash by continuation method with convergence guarantees, which learns exactly binary hash codes from imbalanced similarity data. The key idea is to attack the ill-posed gradient problem in optimizing deep networks with non-smooth binary activations by continuation method, in which we begin from learning an easier network with smoothed activation function and let it evolve during the training, until it eventually goes back to being the original, difficult to optimize, deep network with the sign activation function. Comprehensive empirical evidence shows that HashNet can generate exactly binary hash codes and yield state-of-the-art multimedia retrieval performance on standard benchmarks." "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,li2014two,\cite{li2014two},Two-Stage Hashing for Fast Document Retrieval.,,,True,False,"Li, Hao and Liu, Wei and Ji, Heng",2014.0,,,,,Two-Stage Hashing for Fast Document Retrieval.,[PDF] Two-Stage Hashing for Fast Document Retrieval,https://blender.cs.illinois.edu/paper/hashing.pdf,The primary contribution is to propose a two-stage unsupervised hashing framework which harmoniously integrates two state-of-the-art hashing algorithms "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,kang2021learning,\cite{kang2021learning},"Learning to Embed Categorical Features without Embedding Tables for Recommendation",http://arxiv.org/abs/2010.10784v2,"Embedding learning of categorical features (e.g. user/item IDs) is at the core of various recommendation models including matrix factorization and neural collaborative filtering. The standard approach creates an embedding table where each row represents a dedicated embedding vector for every unique feature value. However, this method fails to efficiently handle high-cardinality features and unseen feature values (e.g. new video ID) that are prevalent in real-world recommendation systems. In this paper, we propose an alternative embedding framework Deep Hash Embedding (DHE), replacing embedding tables by a deep embedding network to compute embeddings on the fly. DHE first encodes the feature value to a unique identifier vector with multiple hashing functions and transformations, and then applies a DNN to convert the identifier vector to an embedding.
The encoding module is deterministic, non-learnable, and free of storage, while the embedding network is updated during the training time to learn embedding generation. Empirical results show that DHE achieves comparable AUC against the standard one-hot full embedding, with smaller model sizes. Our work sheds light on the design of DNN-based alternative embedding schemes for categorical features without using embedding table lookup.",True,True,"Kang, Wang-Cheng and Cheng, Derek Zhiyuan and Yao, Tiansheng and Yi, Xinyang and Chen, Ting and Hong, Lichan and Chi, Ed H",2021.0,,,,SIGKDD,"Learning to Embed Categorical Features without Embedding Tables for Recommendation",Learning to Embed Categorical Features without Embedding Tables ...,https://arxiv.org/abs/2010.10784,"In this paper, we propose an alternative embedding framework Deep Hash Embedding (DHE), replacing embedding tables by a deep embedding network to compute" "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,zhang2016discrete,\cite{zhang2016discrete},Discrete collaborative filtering,,,True,False,"Zhang, Hanwang and Shen, Fumin and Liu, Wei and He, Xiangnan and Luan, Huanbo and Chua, Tat-Seng",2016.0,,,,,Discrete collaborative filtering,Discrete Collaborative Filtering - ACM Digital Library,https://dl.acm.org/doi/10.1145/2911451.2911502,"In this paper, we propose a principled CF hashing framework called Discrete Collaborative Filtering (DCF), which directly tackles the challenging discrete" "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,zhang2017discrete,\cite{zhang2017discrete},Discrete personalized ranking for fast collaborative filtering from implicit feedback,,,True,False,"Zhang, Yan and Lian, Defu and Yang, Guowu",2017.0,,,,,Discrete personalized ranking for fast collaborative filtering from implicit feedback,Discrete Personalized Ranking for Fast Collaborative Filtering from ...,https://ojs.aaai.org/index.php/AAAI/article/view/10764,"To this end, we propose a learning-based hashing framework called Discrete Personalized Ranking (DPR), to map users and items to a Hamming space" "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,li2019learning,\cite{li2019learning},Learning binary codes with neural collaborative filtering for efficient recommendation systems,,,True,False,"Li, Yang and Wang, Suhang and Pan, Quan and Peng, Haiyun and Yang, Tao and Cambria, Erik",2019.0,,,,KBS,Learning binary codes with neural collaborative filtering for efficient recommendation systems,Learning binary codes with neural collaborative filtering for efficient ...,https://www.sciencedirect.com/science/article/pii/S0950705119300735,"In this paper, we investigate binary codes with neural collaborative filtering for an efficient recommendation. The work is related to hashing" "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,kang2019candidate,\cite{kang2019candidate},"Candidate Generation with Binary Codes for Large-Scale Top-N Recommendation",http://arxiv.org/abs/1909.05475v1,"Generating the Top-N recommendations from a large corpus is computationally expensive to perform at scale. Candidate generation and re-ranking based approaches are often adopted in industrial settings to alleviate efficiency problems. 
However it remains to be fully studied how well such schemes approximate complete rankings (or how many candidates are required to achieve a good approximation), or to develop systematic approaches to generate high-quality candidates efficiently. In this paper, we seek to investigate these questions via proposing a candidate generation and re-ranking based framework (CIGAR), which first learns a preference-preserving binary embedding for building a hash table to retrieve candidates, and then learns to re-rank the candidates using real-valued ranking models with a candidate-oriented objective. We perform a comprehensive study on several large-scale real-world datasets consisting of millions of users/items and hundreds of millions of interactions. Our results show that CIGAR significantly boosts the Top-N accuracy against state-of-the-art recommendation models, while reducing the query time by orders of magnitude. We hope that this work could draw more attention to the candidate generation problem in recommender systems.",True,True,"Kang, Wang-Cheng and McAuley, Julian",2019.0,,,,,"Candidate Generation with Binary Codes for Large-Scale Top-N Recommendation",Candidate Generation with Binary Codes for Large-Scale Top-N ...,https://dl.acm.org/doi/10.1145/3357384.3357930,"Our results show that CIGAR significantly boosts the Top-N accuracy against state-of-the-art recommendation models, while reducing the query time by orders of" "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,hashgnn,\cite{hashgnn},Learning to hash with GNNs for recommender systems,,,True,False,"Tan, Qiaoyu and Liu, Ninghao and Zhao, Xing and Yang, Hongxia and Zhou, Jingren and Hu, Xia",2020.0,,,,,Learning to hash with GNNs for recommender systems,Learning to Hash with Graph Neural Networks for Recommender ...,https://dl.acm.org/doi/10.1145/3366423.3380266,"In this work, we investigate the problem of hashing with graph neural networks (GNNs) for high quality retrieval, and propose a simple yet effective discrete" "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,wu2020comprehensive,\cite{wu2020comprehensive},A Comprehensive Survey on Graph Neural Networks,http://arxiv.org/abs/1901.00596v4,"Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. 
We further discuss the applications of graph neural networks across various domains and summarize the open source codes, benchmark data sets, and model evaluation of graph neural networks. Finally, we propose potential research directions in this rapidly growing field.",True,True,"Wu, Zonghan and Pan, Shirui and Chen, Fengwen and Long, Guodong and Zhang, Chengqi and Yu, Philip S",2020.0,,,,IEEE TNNLS,A Comprehensive Survey on Graph Neural Networks,A Comprehensive Survey on Graph Neural Networks,http://arxiv.org/pdf/1901.00596v4,"Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes, benchmark data sets, and model evaluation of graph neural networks. Finally, we propose potential research directions in this rapidly growing field." "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,wang2024uncertainty,\cite{wang2024uncertainty},Uncertainty in Graph Neural Networks: A Survey,http://arxiv.org/abs/2403.07185v2,"Graph Neural Networks (GNNs) have been extensively used in various real-world applications. However, the predictive uncertainty of GNNs stemming from diverse sources such as inherent randomness in data and model training errors can lead to unstable and erroneous predictions.
Therefore, identifying, quantifying, and utilizing uncertainty are essential to enhance the performance of the model for the downstream tasks as well as the reliability of the GNN predictions. This survey aims to provide a comprehensive overview of the GNNs from the perspective of uncertainty with an emphasis on its integration in graph learning. We compare and summarize existing graph uncertainty theory and methods, alongside the corresponding downstream tasks. Thereby, we bridge the gap between theory and practice, meanwhile connecting different GNN communities. Moreover, our work provides valuable insights into promising directions in this field." "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,qiu2024hihpq,\cite{qiu2024hihpq},"HiHPQ: Hierarchical Hyperbolic Product Quantization for Unsupervised Image Retrieval",http://arxiv.org/abs/2401.07212v1,"Existing unsupervised deep product quantization methods primarily aim for the increased similarity between different views of the identical image, whereas the delicate multi-level semantic similarities preserved between images are overlooked. Moreover, these methods predominantly focus on the Euclidean space for computational convenience, compromising their ability to map the multi-level semantic relationships between images effectively. To mitigate these shortcomings, we propose a novel unsupervised product quantization method dubbed \textbf{Hi}erarchical \textbf{H}yperbolic \textbf{P}roduct \textbf{Q}uantization (HiHPQ), which learns quantized representations by incorporating hierarchical semantic similarity within hyperbolic geometry. Specifically, we propose a hyperbolic product quantizer, where the hyperbolic codebook attention mechanism and the quantized contrastive learning on the hyperbolic product manifold are introduced to expedite quantization. Furthermore, we propose a hierarchical semantics learning module, designed to enhance the distinction between similar and non-matching images for a query by utilizing the extracted hierarchical semantics as an additional training supervision. Experiments on benchmarks show that our proposed method outperforms state-of-the-art baselines.",True,True,"Qiu, Zexuan and Liu, Jiahong and Chen, Yankai and King, Irwin",2024.0,,,,,"HiHPQ: Hierarchical Hyperbolic Product Quantization for Unsupervised Image Retrieval",HiHPQ: Hierarchical Hyperbolic Product Quantization for ...,https://ojs.aaai.org/index.php/AAAI/article/view/28261/28514,"by Z Qiu · 2024 · Cited by 11 — Abstract. 
Existing unsupervised deep product quantization methods primarily aim for the increased similarity between different views of the identical image, ..." "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,chen2021towards,\cite{chen2021towards},Towards low-loss 1-bit quantization of user-item representations for top-k recommendation,,,True,False,"Chen, Yankai and Zhang, Yifei and Zhang, Yingxue and Guo, Huifeng and Li, Jingjie and Tang, Ruiming and He, Xiuqiang and King, Irwin",2021.0,,,,arXiv preprint arXiv:2112.01944,Towards low-loss 1-bit quantization of user-item representations for top-k recommendation,Towards Low-loss 1-bit Quantization of User-item Representations ...,https://arxiv.org/abs/2112.01944,"As the target is to embed latent features in the discrete embedding space, developing quantization for user-item representations with a few low-precision integers confronts the challenge of high information loss, thus leading to unsatisfactory performance in Top-K recommendation." "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,bigcn,\cite{bigcn},Bi-gcn: Binary graph convolutional network,,,True,False,"Wang, Junfu and Wang, Yunhong and Yang, Zhen and Yang, Liang and Guo, Yuanfang",2021.0,,,,,Bi-gcn: Binary graph convolutional network,Bi-GCN: Binary Graph Convolutional Network,http://arxiv.org/pdf/2010.07565v2,"Graph Neural Networks (GNNs) have achieved tremendous success in graph representation learning. Unfortunately, current GNNs usually rely on loading the entire attributed graph into network for processing. This implicit assumption may not be satisfied with limited memory resources, especially when the attributed graph is large. In this paper, we pioneer to propose a Binary Graph Convolutional Network (Bi-GCN), which binarizes both the network parameters and input node features. Besides, the original matrix multiplications are revised to binary operations for accelerations. According to the theoretical analysis, our Bi-GCN can reduce the memory consumption by an average of ~30x for both the network parameters and input data, and accelerate the inference speed by an average of ~47x, on the citation networks. Meanwhile, we also design a new gradient approximation based back-propagation method to train our Bi-GCN well. Extensive experiments have demonstrated that our Bi-GCN can give a comparable performance compared to the full-precision baselines. Besides, our binarization approach can be easily applied to other GNNs, which has been verified in the experiments."
"Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,bahri2021binary,\cite{bahri2021binary},Binary Graph Neural Networks,http://arxiv.org/abs/2012.15823v2,"Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data. As they generalize the operations of classical CNNs on grids to arbitrary topologies, GNNs also bring much of the implementation challenges of their Euclidean counterparts. Model size, memory footprint, and energy consumption are common concerns for many real-world applications. Network binarization allocates a single bit to parameters and activations, thus dramatically reducing the memory requirements (up to 32x compared to single-precision floating-point numbers) and maximizing the benefits of fast SIMD instructions on modern hardware for measurable speedups. However, in spite of the large body of work on binarization for classical CNNs, this area remains largely unexplored in geometric deep learning. In this paper, we present and evaluate different strategies for the binarization of graph neural networks. We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks. In particular, we present the first dynamic graph neural network in Hamming space, able to leverage efficient k-NN search on binary vectors to speed-up the construction of the dynamic graph. We further verify that the binary models offer significant savings on embedded devices. Our code is publicly available on Github.",True,True,"Bahri, Mehdi and Bahl, Ga{\'e}tan and Zafeiriou, Stefanos",2021.0,,,,,Binary Graph Neural Networks,Binary Graph Neural Networks,http://arxiv.org/pdf/2012.15823v2,"Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data. As they generalize the operations of classical CNNs on grids to arbitrary topologies, GNNs also bring much of the implementation challenges of their Euclidean counterparts. Model size, memory footprint, and energy consumption are common concerns for many real-world applications. Network binarization allocates a single bit to parameters and activations, thus dramatically reducing the memory requirements (up to 32x compared to single-precision floating-point numbers) and maximizing the benefits of fast SIMD instructions on modern hardware for measurable speedups. However, in spite of the large body of work on binarization for classical CNNs, this area remains largely unexplored in geometric deep learning. In this paper, we present and evaluate different strategies for the binarization of graph neural networks. We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks. In particular, we present the first dynamic graph neural network in Hamming space, able to leverage efficient k-NN search on binary vectors to speed-up the construction of the dynamic graph. We further verify that the binary models offer significant savings on embedded devices. Our code is publicly available on Github." 
"Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,chen2022learning,\cite{chen2022learning},"Learning Binarized Graph Representations with Multi-faceted Quantization Reinforcement for Top-K Recommendation",http://arxiv.org/abs/2206.02115v1,"Learning vectorized embeddings is at the core of various recommender systems for user-item matching. To perform efficient online inference, representation quantization, aiming to embed the latent features by a compact sequence of discrete numbers, recently shows the promising potentiality in optimizing both memory and computation overheads. However, existing work merely focuses on numerical quantization whilst ignoring the concomitant information loss issue, which, consequently, leads to conspicuous performance degradation. In this paper, we propose a novel quantization framework to learn Binarized Graph Representations for Top-K Recommendation (BiGeaR). BiGeaR introduces multi-faceted quantization reinforcement at the pre-, mid-, and post-stage of binarized representation learning, which substantially retains the representation informativeness against embedding binarization. In addition to saving the memory footprint, BiGeaR further develops solid online inference acceleration with bitwise operations, providing alternative flexibility for the realistic deployment. The empirical results over five large real-world benchmarks show that BiGeaR achieves about 22%~40% performance improvement over the state-of-the-art quantization-based recommender system, and recovers about 95%~102% of the performance capability of the best full-precision counterpart with over 8x time and space reduction.",True,True,"Chen, Yankai and Guo, Huifeng and Zhang, Yingxue and Ma, Chen and Tang, Ruiming and Li, Jingjie and King, Irwin",2022.0,,,,,"Learning Binarized Graph Representations with Multi-faceted Quantization Reinforcement for Top-K Recommendation",Learning Binarized Graph Representations with Multi- ...,https://arxiv.org/pdf/2206.02115,"by Y Chen · 2022 · Cited by 42 — In this paper, we propose a novel quantiza- tion framework to learn Binarized Graph Representations for Top-K. Recommendation (BiGeaR). BiGeaR" "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,hinton2015distilling,\cite{hinton2015distilling},Distilling the Knowledge in a Neural Network,http://arxiv.org/abs/1503.02531v1,"A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. 
Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.",True,True,"Hinton, Geoffrey and Vinyals, Oriol and Dean, Jeff",2015.0,,,,arXiv preprint arXiv:1503.02531,Distilling the Knowledge in a Neural Network,Distilling the Knowledge in a Neural Network,http://arxiv.org/pdf/1503.02531v1,"A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel." "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,tian2023knowledge,\cite{tian2023knowledge},Knowledge Distillation on Graphs: A Survey,http://arxiv.org/abs/2302.00219v1,"Graph Neural Networks (GNNs) have attracted tremendous attention by demonstrating their capability to handle graph data. However, they are difficult to be deployed in resource-limited devices due to model sizes and scalability constraints imposed by the multi-hop data dependency. In addition, real-world graphs usually possess complex structural information and features. Therefore, to improve the applicability of GNNs and fully encode the complicated topological information, knowledge distillation on graphs (KDG) has been introduced to build a smaller yet effective model and exploit more knowledge from data, leading to model compression and performance improvement. Recently, KDG has achieved considerable progress with many studies proposed. In this survey, we systematically review these works. Specifically, we first introduce KDG challenges and bases, then categorize and summarize existing works of KDG by answering the following three questions: 1) what to distillate, 2) who to whom, and 3) how to distillate. Finally, we share our thoughts on future research directions.",True,True,"Tian, Yijun and Pei, Shichao and Zhang, Xiangliang and Zhang, Chuxu and Chawla, Nitesh",2023.0,,,,ACM Computing Surveys,Knowledge Distillation on Graphs: A Survey,Knowledge Distillation on Graphs: A Survey,http://arxiv.org/pdf/2302.00219v1,"Graph Neural Networks (GNNs) have attracted tremendous attention by demonstrating their capability to handle graph data. However, they are difficult to be deployed in resource-limited devices due to model sizes and scalability constraints imposed by the multi-hop data dependency. In addition, real-world graphs usually possess complex structural information and features. 
Therefore, to improve the applicability of GNNs and fully encode the complicated topological information, knowledge distillation on graphs (KDG) has been introduced to build a smaller yet effective model and exploit more knowledge from data, leading to model compression and performance improvement. Recently, KDG has achieved considerable progress with many studies proposed. In this survey, we systematically review these works. Specifically, we first introduce KDG challenges and bases, then categorize and summarize existing works of KDG by answering the following three questions: 1) what to distillate, 2) who to whom, and 3) how to distillate. Finally, we share our thoughts on future research directions." "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,yang2020distilling,\cite{yang2020distilling},Distilling Knowledge from Graph Convolutional Networks,http://arxiv.org/abs/2003.10477v4,"Existing knowledge distillation methods focus on convolutional neural networks (CNNs), where the input samples like images lie in a grid domain, and have largely overlooked graph convolutional networks (GCN) that handle non-grid data. In this paper, we propose to our best knowledge the first dedicated approach to distilling knowledge from a pre-trained GCN model. To enable the knowledge transfer from the teacher GCN to the student, we propose a local structure preserving module that explicitly accounts for the topological semantics of the teacher. In this module, the local structure information from both the teacher and the student are extracted as distributions, and hence minimizing the distance between these distributions enables topology-aware knowledge transfer from the teacher, yielding a compact yet high-performance student model. Moreover, the proposed approach is readily extendable to dynamic graph models, where the input graphs for the teacher and the student may differ. We evaluate the proposed method on two different datasets using GCN models of different architectures, and demonstrate that our method achieves the state-of-the-art knowledge distillation performance for GCN models. Code is publicly available at https://github.com/ihollywhy/DistillGCN.PyTorch.",True,True,"Yang, Yiding and Qiu, Jiayan and Song, Mingli and Tao, Dacheng and Wang, Xinchao",2020.0,,,,,Distilling Knowledge from Graph Convolutional Networks,Distilling Knowledge from Graph Convolutional Networks,http://arxiv.org/pdf/2003.10477v4,"Existing knowledge distillation methods focus on convolutional neural networks (CNNs), where the input samples like images lie in a grid domain, and have largely overlooked graph convolutional networks (GCN) that handle non-grid data. In this paper, we propose to our best knowledge the first dedicated approach to distilling knowledge from a pre-trained GCN model. To enable the knowledge transfer from the teacher GCN to the student, we propose a local structure preserving module that explicitly accounts for the topological semantics of the teacher. In this module, the local structure information from both the teacher and the student are extracted as distributions, and hence minimizing the distance between these distributions enables topology-aware knowledge transfer from the teacher, yielding a compact yet high-performance student model. Moreover, the proposed approach is readily extendable to dynamic graph models, where the input graphs for the teacher and the student may differ. 
We evaluate the proposed method on two different datasets using GCN models of different architectures, and demonstrate that our method achieves the state-of-the-art knowledge distillation performance for GCN models. Code is publicly available at https://github.com/ihollywhy/DistillGCN.PyTorch." "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,deng2021graph,\cite{deng2021graph},Graph-Free Knowledge Distillation for Graph Neural Networks,http://arxiv.org/abs/2105.07519v2,"Knowledge distillation (KD) transfers knowledge from a teacher network to a student by enforcing the student to mimic the outputs of the pretrained teacher on training data. However, data samples are not always accessible in many cases due to large data sizes, privacy, or confidentiality. Many efforts have been made on addressing this problem for convolutional neural networks (CNNs) whose inputs lie in a grid domain within a continuous space such as images and videos, but largely overlook graph neural networks (GNNs) that handle non-grid data with different topology structures within a discrete space. The inherent differences between their inputs make these CNN-based approaches not applicable to GNNs. In this paper, we propose to our best knowledge the first dedicated approach to distilling knowledge from a GNN without graph data. The proposed graph-free KD (GFKD) learns graph topology structures for knowledge transfer by modeling them with multivariate Bernoulli distribution. We then introduce a gradient estimator to optimize this framework. Essentially, the gradients w.r.t. graph structures are obtained by only using GNN forward-propagation without back-propagation, which means that GFKD is compatible with modern GNN libraries such as DGL and Geometric. Moreover, we provide the strategies for handling different types of prior knowledge in the graph data or the GNNs. Extensive experiments demonstrate that GFKD achieves the state-of-the-art performance for distilling knowledge from GNNs without training data.",True,True,"Deng, Xiang and Zhang, Zhongfei",2021.0,,,,IJCAI,Graph-Free Knowledge Distillation for Graph Neural Networks,Graph-Free Knowledge Distillation for Graph Neural Networks - arXiv,https://arxiv.org/abs/2105.07519,The proposed graph-free KD (GFKD) learns graph topology structures for knowledge transfer by modeling them with multivariate Bernoulli distribution. "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,joshi2022representation,\cite{joshi2022representation},On Representation Knowledge Distillation for Graph Neural Networks,http://arxiv.org/abs/2111.04964v4,"Knowledge distillation is a learning paradigm for boosting resource-efficient graph neural networks (GNNs) using more expressive yet cumbersome teacher models. Past work on distillation for GNNs proposed the Local Structure Preserving loss (LSP), which matches local structural relationships defined over edges across the student and teacher's node embeddings. This paper studies whether preserving the global topology of how the teacher embeds graph data can be a more effective distillation objective for GNNs, as real-world graphs often contain latent interactions and noisy edges. 
We propose Graph Contrastive Representation Distillation (G-CRD), which uses contrastive learning to implicitly preserve global topology by aligning the student node embeddings to those of the teacher in a shared representation space. Additionally, we introduce an expanded set of benchmarks on large-scale real-world datasets where the performance gap between teacher and student GNNs is non-negligible. Experiments across 4 datasets and 14 heterogeneous GNN architectures show that G-CRD consistently boosts the performance and robustness of lightweight GNNs, outperforming LSP (and a global structure preserving variant of LSP) as well as baselines from 2D computer vision. An analysis of the representational similarity among teacher and student embedding spaces reveals that G-CRD balances preserving local and global relationships, while structure preserving approaches are best at preserving one or the other. Our code is available at https://github.com/chaitjo/efficient-gnns",True,True,"Joshi, Chaitanya K and Liu, Fayao and Xun, Xu and Lin, Jie and Foo, Chuan Sheng",2022.0,,,,TNNLS,On Representation Knowledge Distillation for Graph Neural Networks,On Representation Knowledge Distillation for Graph Neural Networks,http://arxiv.org/pdf/2111.04964v4,"Knowledge distillation is a learning paradigm for boosting resource-efficient graph neural networks (GNNs) using more expressive yet cumbersome teacher models. Past work on distillation for GNNs proposed the Local Structure Preserving loss (LSP), which matches local structural relationships defined over edges across the student and teacher's node embeddings. This paper studies whether preserving the global topology of how the teacher embeds graph data can be a more effective distillation objective for GNNs, as real-world graphs often contain latent interactions and noisy edges. We propose Graph Contrastive Representation Distillation (G-CRD), which uses contrastive learning to implicitly preserve global topology by aligning the student node embeddings to those of the teacher in a shared representation space. Additionally, we introduce an expanded set of benchmarks on large-scale real-world datasets where the performance gap between teacher and student GNNs is non-negligible. Experiments across 4 datasets and 14 heterogeneous GNN architectures show that G-CRD consistently boosts the performance and robustness of lightweight GNNs, outperforming LSP (and a global structure preserving variant of LSP) as well as baselines from 2D computer vision. An analysis of the representational similarity among teacher and student embedding spaces reveals that G-CRD balances preserving local and global relationships, while structure preserving approaches are best at preserving one or the other. 
Our code is available at https://github.com/chaitjo/efficient-gnns" "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,wang2024graph,\cite{wang2024graph},Graph contrastive learning with high-order feature interactions and adversarial Wasserstein-distance-based alignment,,,True,False,"Wang, Chenxu and Wan, Zhizhong and Meng, Panpan and Wang, Shihao and Wang, Zhanggong",2024.0,,,,IJMLC,Graph contrastive learning with high-order feature interactions and adversarial Wasserstein-distance-based alignment,Graph contrastive learning with high-order feature interactions and ...,https://www.researchgate.net/publication/386016895_Graph_contrastive_learning_with_high-order_feature_interactions_and_adversarial_Wasserstein-distance-based_alignment,we propose a novel GCL model with high-order feature interactions and adversarial Wasserstein-distance-based alignment. Our model employs DNNs "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,zhou2021distilling,\cite{zhou2021distilling},Distilling Holistic Knowledge with Graph Neural Networks,http://arxiv.org/abs/2108.05507v1,"Knowledge Distillation (KD) aims at transferring knowledge from a larger well-optimized teacher network to a smaller learnable student network.Existing KD methods have mainly considered two types of knowledge, namely the individual knowledge and the relational knowledge. However, these two types of knowledge are usually modeled independently while the inherent correlations between them are largely ignored. It is critical for sufficient student network learning to integrate both individual knowledge and relational knowledge while reserving their inherent correlation. In this paper, we propose to distill the novel holistic knowledge based on an attributed graph constructed among instances. The holistic knowledge is represented as a unified graph-based embedding by aggregating individual knowledge from relational neighborhood samples with graph neural networks, the student network is learned by distilling the holistic knowledge in a contrastive manner. Extensive experiments and ablation studies are conducted on benchmark datasets, the results demonstrate the effectiveness of the proposed method. The code has been published in https://github.com/wyc-ruiker/HKD",True,True,"Zhou, Sheng and Wang, Yucheng and Chen, Defang and Chen, Jiawei and Wang, Xin and Wang, Can and Bu, Jiajun",2021.0,,,,,Distilling Holistic Knowledge with Graph Neural Networks,Distilling Holistic Knowledge with Graph Neural Networks,http://arxiv.org/pdf/2108.05507v1,"Knowledge Distillation (KD) aims at transferring knowledge from a larger well-optimized teacher network to a smaller learnable student network.Existing KD methods have mainly considered two types of knowledge, namely the individual knowledge and the relational knowledge. However, these two types of knowledge are usually modeled independently while the inherent correlations between them are largely ignored. It is critical for sufficient student network learning to integrate both individual knowledge and relational knowledge while reserving their inherent correlation. In this paper, we propose to distill the novel holistic knowledge based on an attributed graph constructed among instances. 
The holistic knowledge is represented as a unified graph-based embedding by aggregating individual knowledge from relational neighborhood samples with graph neural networks, the student network is learned by distilling the holistic knowledge in a contrastive manner. Extensive experiments and ablation studies are conducted on benchmark datasets, the results demonstrate the effectiveness of the proposed method. The code has been published in https://github.com/wyc-ruiker/HKD" "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,wu2022knowledge,\cite{wu2022knowledge},Knowledge distillation improves graph structure augmentation for graph neural networks,,,True,False,"Wu, Lirong and Lin, Haitao and Huang, Yufei and Li, Stan Z",2022.0,,,,NeurIPS,Knowledge distillation improves graph structure augmentation for graph neural networks,Knowledge distillation improves graph structure augmentation for ...,https://dl.acm.org/doi/10.5555/3600270.3601128,"Graph (structure) augmentation aims to perturb the graph structure through heuristic or probabilistic rules, enabling the nodes to capture" "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,liu2024fine,\cite{liu2024fine},Fine-grained learning behavior-oriented knowledge distillation for graph neural networks,,,True,False,"Liu, Kang and Huang, Zhenhua and Wang, Chang-Dong and Gao, Beibei and Chen, Yunwen",2024.0,,,,TNNLS,Fine-grained learning behavior-oriented knowledge distillation for graph neural networks,Fine-Grained Learning Behavior-Oriented Knowledge Distillation for ...,https://www.researchgate.net/publication/382303538_Fine-Grained_Learning_Behavior-Oriented_Knowledge_Distillation_for_Graph_Neural_Networks,"Knowledge distillation (KD), as an effective compression technology, is used to reduce the resource consumption of graph neural networks (GNNs) and" "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,feng2022freekd,\cite{feng2022freekd},FreeKD: Free-direction Knowledge Distillation for Graph Neural Networks,http://arxiv.org/abs/2206.06561v4,"Knowledge distillation (KD) has demonstrated its effectiveness to boost the performance of graph neural networks (GNNs), where its goal is to distill knowledge from a deeper teacher GNN into a shallower student GNN. However, it is actually difficult to train a satisfactory teacher GNN due to the well-known over-parametrized and over-smoothing issues, leading to invalid knowledge transfer in practical applications. In this paper, we propose the first Free-direction Knowledge Distillation framework via Reinforcement learning for GNNs, called FreeKD, which is no longer required to provide a deeper well-optimized teacher GNN. The core idea of our work is to collaboratively build two shallower GNNs in an effort to exchange knowledge between them via reinforcement learning in a hierarchical way. As we observe that one typical GNN model often has better and worse performances at different nodes during training, we devise a dynamic and free-direction knowledge transfer strategy that consists of two levels of actions: 1) node-level action determines the directions of knowledge transfer between the corresponding nodes of two networks; and then 2) structure-level action determines which of the local structures generated by the node-level actions to be propagated. 
In essence, our FreeKD is a general and principled framework which can be naturally compatible with GNNs of different architectures. Extensive experiments on five benchmark datasets demonstrate our FreeKD outperforms two base GNNs in a large margin, and shows its efficacy to various GNNs. More surprisingly, our FreeKD has comparable or even better performance than traditional KD algorithms that distill knowledge from a deeper and stronger teacher GNN.",True,True,"Feng, Kaituo and Li, Changsheng and Yuan, Ye and Wang, Guoren",2022.0,,,,,FreeKD: Free-direction Knowledge Distillation for Graph Neural Networks,Free-direction Knowledge Distillation for Graph Neural Networks,https://arxiv.org/abs/2206.06561,"In this paper, we propose the first Free-direction Knowledge Distillation framework via Reinforcement learning for GNNs, called FreeKD." "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,hu2020creating,\cite{hu2020creating},"Creating Something from Nothing: Unsupervised Knowledge Distillation for Cross-Modal Hashing",http://arxiv.org/abs/2004.00280v1,"In recent years, cross-modal hashing (CMH) has attracted increasing attentions, mainly because its potential ability of mapping contents from different modalities, especially in vision and language, into the same space, so that it becomes efficient in cross-modal data retrieval. There are two main frameworks for CMH, differing from each other in whether semantic supervision is required. Compared to the unsupervised methods, the supervised methods often enjoy more accurate results, but require much heavier labors in data annotation. In this paper, we propose a novel approach that enables guiding a supervised method using outputs produced by an unsupervised method. Specifically, we make use of teacher-student optimization for propagating knowledge. Experiments are performed on two popular CMH benchmarks, i.e., the MIRFlickr and NUS-WIDE datasets. 
Our approach outperforms all existing unsupervised methods by a large margin.",True,True,"Hu, Hengtong and Xie, Lingxi and Hong, Richang and Tian, Qi",2020.0,,,,,"Creating Something from Nothing: Unsupervised Knowledge Distillation for Cross-Modal Hashing",[PDF] Unsupervised Knowledge Distillation for Cross-Modal Hashing,https://openaccess.thecvf.com/content_CVPR_2020/papers/Hu_Creating_Something_From_Nothing_Unsupervised_Knowledge_Distillation_for_Cross-Modal_Hashing_CVPR_2020_paper.pdf,"This paper proposes using an unsupervised method to guide a supervised cross-modal hashing method, using teacher-student optimization and ""creating something" "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,su2021semi,\cite{su2021semi},Semi-supervised knowledge distillation for cross-modal hashing,,,True,False,"Su, Mingyue and Gu, Guanghua and Ren, Xianlong and Fu, Hao and Zhao, Yao",2021.0,,,,IEEE Transactions on Multimedia,Semi-supervised knowledge distillation for cross-modal hashing,Semi-Supervised Knowledge Distillation for Cross-Modal Hashing,https://www.researchgate.net/publication/356453614_Semi-Supervised_Knowledge_Distillation_for_Cross-Modal_Hashing,"In this paper, we propose a novel semi-supervised approach called semi-supervised knowledge distillation for cross-modal hashing (SKDCH) to overcome the above-" "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,jang2022deep,\cite{jang2022deep},Deep Hash Distillation for Image Retrieval,http://arxiv.org/abs/2112.08816v2,"In hash-based image retrieval systems, degraded or transformed inputs usually generate different codes from the original, deteriorating the retrieval accuracy. To mitigate this issue, data augmentation can be applied during training. However, even if augmented samples of an image are similar in real feature space, the quantization can scatter them far away in Hamming space. This results in representation discrepancies that can impede training and degrade performance. In this work, we propose a novel self-distilled hashing scheme to minimize the discrepancy while exploiting the potential of augmented data. By transferring the hash knowledge of the weakly-transformed samples to the strong ones, we make the hash code insensitive to various transformations. We also introduce hash proxy-based similarity learning and binary cross entropy-based quantization loss to provide fine quality hash codes. Ultimately, we construct a deep hashing framework that not only improves the existing deep hashing approaches, but also achieves the state-of-the-art retrieval results. Extensive experiments are conducted and confirm the effectiveness of our work.",True,True,"Jang, Young Kyun and Gu, Geonmo and Ko, Byungsoo and Kang, Isaac and Cho, Nam Ik",2022.0,,,,,Deep Hash Distillation for Image Retrieval,Deep Hash Distillation for Image Retrieval,http://arxiv.org/pdf/2112.08816v2,"In hash-based image retrieval systems, degraded or transformed inputs usually generate different codes from the original, deteriorating the retrieval accuracy. To mitigate this issue, data augmentation can be applied during training. However, even if augmented samples of an image are similar in real feature space, the quantization can scatter them far away in Hamming space. This results in representation discrepancies that can impede training and degrade performance. 
In this work, we propose a novel self-distilled hashing scheme to minimize the discrepancy while exploiting the potential of augmented data. By transferring the hash knowledge of the weakly-transformed samples to the strong ones, we make the hash code insensitive to various transformations. We also introduce hash proxy-based similarity learning and binary cross entropy-based quantization loss to provide fine quality hash codes. Ultimately, we construct a deep hashing framework that not only improves the existing deep hashing approaches, but also achieves the state-of-the-art retrieval results. Extensive experiments are conducted and confirm the effectiveness of our work." "Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering",2506.02750v1,tan2022teacher,\cite{tan2022teacher},Teacher-student learning: Efficient hierarchical message aggregation hashing for cross-modal retrieval,,,True,False,"Tan, Wentao and Zhu, Lei and Li, Jingjing and Zhang, Huaxiang and Han, Junwei",2022.0,,,,IEEE Transactions on Multimedia,Teacher-student learning: Efficient hierarchical message aggregation hashing for cross-modal retrieval,"FutureTwT/HMAH: The source code of ""Teacher-Student ...",https://github.com/FutureTwT/HMAH,"GitHub - FutureTwT/HMAH: The source code of ""Teacher-Student Learning: Efficient Hierarchical Message Aggregation Hashing for Cross-Modal Retrieval."" (Accepted by TMM 2022) If you want to run our code compared with all the cross-modal hashing retrieval baselines on three datasets, we suggest that you should refer the follow link." "What About Emotions? Guiding Fine-Grained Emotion Extraction from Mobile App Reviews",2505.23452v1,Riccosan2023,\cite{Riccosan2023},Multilabel multiclass sentiment and emotion dataset from Indonesian mobile application review,,,True,False,Riccosan and K. E.
Saputra,2023.0,,,10.1016/j.dib.2023.109576,Data in Brief,Multilabel multiclass sentiment and emotion dataset from Indonesian mobile application review,Multilabel multiclass sentiment and emotion dataset from indonesian ...,https://www.sciencedirect.com/science/article/pii/S2352340923006662,"This work creates a multi-label multi-class Indonesian-language dataset based on public reviews of mobile applications with sentiment and emotional values. Because process of annotating data in the form of sentences used in the creation of this dataset is one of the processes involved in identifying a kind of sentiment or emotion included in a sentence, it is also known as the Sentiment Analysis process [4] and this approach is also a type of text analysis since it attempts to extract some values (sentiment and emotion) from Indonesian sentences [5]. Multilabel Multiclass Sentiment and Emotion Dataset from Indonesian Mobile Application Review (Original data) (Zenodo) " "What About Emotions? Guiding Fine-Grained Emotion Extraction from Mobile App Reviews",2505.23452v1,Malgaonkar2019,\cite{Malgaonkar2019},Appsent A Tool That Analyzes App Reviews,http://arxiv.org/abs/1907.10191v1,"Enterprises are always on the lookout for tools that analyze end-users perspectives on their products. In particular, app reviews have been assessed as useful for guiding improvement efforts and software evolution, however, developers find reading app reviews to be a labor intensive exercise. If such a barrier is eliminated, however, evidence shows that responding to reviews enhances end-users satisfaction and contributes towards the success of products. In this paper, we present Appsent, a mobile analytics tool as an app, to facilitate the analysis of app reviews. This development was led by a literature review on the problem and subsequent evaluation of current available solutions to this challenge. Our investigation found that there was scope to extend currently available tools that analyze app reviews. These gaps thus informed the design and development of Appsent. We subsequently performed an empirical evaluation to validate Appsent usability and the helpfulness of analytics features from users perspective. Outcomes of this evaluation reveal that Appsent provides user-friendly interfaces, helpful functionalities and meaningful analytics. Appsent extracts and visualizes important perceptions from end-users feedback, identifying insights into end-users opinions about various aspects of software features. Although Appsent was developed as a prototype for analyzing app reviews, this tool may be of utility for analyzing product reviews more generally.",True,True,Saurabh Malgaonkar and Chan Won Lee and Sherlock A. Licorish and Bastin Tony Roy Savarimuthu and Amjed Tahir,2019.0,,https://arxiv.org/abs/1907.10191,,,Appsent A Tool That Analyzes App Reviews,Appsent A Tool That Analyzes App Reviews,http://arxiv.org/pdf/1907.10191v1,"Enterprises are always on the lookout for tools that analyze end-users perspectives on their products. In particular, app reviews have been assessed as useful for guiding improvement efforts and software evolution, however, developers find reading app reviews to be a labor intensive exercise. If such a barrier is eliminated, however, evidence shows that responding to reviews enhances end-users satisfaction and contributes towards the success of products. In this paper, we present Appsent, a mobile analytics tool as an app, to facilitate the analysis of app reviews. 
This development was led by a literature review on the problem and subsequent evaluation of current available solutions to this challenge. Our investigation found that there was scope to extend currently available tools that analyze app reviews. These gaps thus informed the design and development of Appsent. We subsequently performed an empirical evaluation to validate Appsent usability and the helpfulness of analytics features from users perspective. Outcomes of this evaluation reveal that Appsent provides user-friendly interfaces, helpful functionalities and meaningful analytics. Appsent extracts and visualizes important perceptions from end-users feedback, identifying insights into end-users opinions about various aspects of software features. Although Appsent was developed as a prototype for analyzing app reviews, this tool may be of utility for analyzing product reviews more generally." "What About Emotions? Guiding Fine-Grained Emotion Extraction from Mobile App Reviews",2505.23452v1,Keertipati2016,\cite{Keertipati2016},Approaches for prioritizing feature improvements extracted from app reviews,,,True,False,"Keertipati, Swetha and Savarimuthu, Bastin Tony Roy and Licorish, Sherlock A.",2016.0,,,10.1145/2915970.2916003,,Approaches for prioritizing feature improvements extracted from app reviews,Approaches for prioritizing feature improvements extracted from app ...,https://dl.acm.org/doi/10.1145/2915970.2916003,App reviews contain valuable feedback about what features should be fixed and improved. This feedback could be 'mined' to facilitate app maintenance and "What About Emotions? Guiding Fine-Grained Emotion Extraction from Mobile App Reviews",2505.23452v1,Singh2022101929,\cite{Singh2022101929},An empirical analysis of mobile learning app usage experience,,,True,False,Yashdeep Singh and Pradeep Kumar Suri,2022.0,,,10.1016/j.techsoc.2022.101929,Technology in Society,An empirical analysis of mobile learning app usage experience,An empirical analysis of mobile learning app usage ...,https://www.sciencedirect.com/science/article/pii/S0160791X22000707,"In light of the research gaps identified, we attempted to achieve the following research objectives: 1) to identify the most frequently used words in the app reviews; 2) to identify and analyze the significant factors which influence the mobile learning app usage experience; 3) to identify and analyze the emotions and sentiments involved in app usage experience; 4) to compare learner perception of public and private sector apps, and school and higher education apps. Firstly, this study's findings are based on the analysis of only 2000 reviews of only four mobile learning apps (two apps each in public and private sectors with one app each in school and higher education) of one app store (Google Play) in the Indian context." "What About Emotions? Guiding Fine-Grained Emotion Extraction from Mobile App Reviews",2505.23452v1,Savarimuthu2023,\cite{Savarimuthu2023},Improving Information Systems Sustainability by Applying Machine Learning to Detect and Reduce Data Waste,,,True,False,B. Savarimuthu and J. Corbett and M. Yasir and V. 
Lakshmi,2023.0,,,10.17705/1CAIS.05308,Communications of the Association for Information Systems,Improving Information Systems Sustainability by Applying Machine Learning to Detect and Reduce Data Waste,Communications of the Association for Information Systems' Post,https://www.linkedin.com/posts/communications-of-the-association-for-information-systems_improving-information-systems-sustainability-activity-7117532219040010240-dXL6,Improving Information Systems Sustainability by Applying Machine Learning to Detect and Reduce Data Waste. Communications of the Association "What About Emotions? Guiding Fine-Grained Emotion Extraction from Mobile App Reviews",2505.23452v1,Cabellos2022,\cite{Cabellos2022},"Do pro-social video games promote moral activity?: an analysis of user reviews of Papers, Please",,,True,False,B. Cabellos and J. I. Pozo and K. Mar{\'i}n-Rubio and others,2022.0,,,10.1007/s10639-022-11072-x,Education and Information Technologies,"Do pro-social video games promote moral activity?: an analysis of user reviews of Papers, Please",(PDF) Do pro-social video games promote moral activity?,https://www.researchgate.net/publication/360501989_Do_pro-social_video_games_promote_moral_activity_an_analysis_of_user_reviews_of_Papers_Please,"In particular, in this research, we set out to identify the potential of 'Papers, Please' to promote moral learning." "What About Emotions? Guiding Fine-Grained Emotion Extraction from Mobile App Reviews",2505.23452v1,Hou2024,\cite{Hou2024},Large Language Models for Software Engineering: A Systematic Literature Review,,,True,False,"Hou, Xinyi and Zhao, Yanjie and Liu, Yue and Yang, Zhou and Wang, Kailong and Li, Li and Luo, Xiapu and Lo, David and Grundy, John and Wang, Haoyu",2024.0,,,10.1145/3695988,ACM Trans. Softw. Eng. Methodol.,Large Language Models for Software Engineering: A Systematic Literature Review,Large Language Models for Software Engineering: A Systematic Literature Review,http://arxiv.org/pdf/2308.10620v6,"Large Language Models (LLMs) have significantly impacted numerous domains, including Software Engineering (SE). Many recent publications have explored LLMs applied to various SE tasks. Nevertheless, a comprehensive understanding of the application, effects, and possible limitations of LLMs on SE is still in its early stages. To bridge this gap, we conducted a systematic literature review (SLR) on LLM4SE, with a particular focus on understanding how LLMs can be exploited to optimize processes and outcomes. We select and analyze 395 research papers from January 2017 to January 2024 to answer four key research questions (RQs). In RQ1, we categorize different LLMs that have been employed in SE tasks, characterizing their distinctive features and uses. In RQ2, we analyze the methods used in data collection, preprocessing, and application, highlighting the role of well-curated datasets for successful LLM for SE implementation. RQ3 investigates the strategies employed to optimize and evaluate the performance of LLMs in SE. Finally, RQ4 examines the specific SE tasks where LLMs have shown success to date, illustrating their practical contributions to the field. From the answers to these RQs, we discuss the current state-of-the-art and trends, identifying gaps in existing research, and flagging promising areas for future study. Our artifacts are publicly available at https://github.com/xinyi-hou/LLM4SE_SLR." "What About Emotions? 
Guiding Fine-Grained Emotion Extraction from Mobile App Reviews",2505.23452v1,Heseltine2024,\cite{Heseltine2024},Large language models as a substitute for human experts in annotating political text,,,True,False,"Heseltine, Thomas and Hohenberg, John",2024.0,,,,Research \& Politics,Large language models as a substitute for human experts in annotating political text,Large language models as a substitute for human experts ...,https://dare.uva.nl/personal/pure/en/publications/large-language-models-as-a-substitute-for-human-experts-in-annotating-political-text(fec8b241-d408-4324-b688-7caa7f1a0b98).html,"However, advances in large language models (LLMs) may make automated annotation increasingly viable. This paper tests the performance of GPT-4 across a range of" "What About Emotions? Guiding Fine-Grained Emotion Extraction from Mobile App Reviews",2505.23452v1,Zhang2025,\cite{Zhang2025},"Revisiting Sentiment Analysis for Software Engineering in the Era of Large Language Models",http://arxiv.org/abs/2310.11113v3,"Software development involves collaborative interactions where stakeholders express opinions across various platforms. Recognizing the sentiments conveyed in these interactions is crucial for the effective development and ongoing maintenance of software systems. For software products, analyzing the sentiment of user feedback, e.g., reviews, comments, and forum posts can provide valuable insights into user satisfaction and areas for improvement. This can guide the development of future updates and features. However, accurately identifying sentiments in software engineering datasets remains challenging. This study investigates bigger large language models (bLLMs) in addressing the labeled data shortage that hampers fine-tuned smaller large language models (sLLMs) in software engineering tasks. We conduct a comprehensive empirical study using five established datasets to assess three open-source bLLMs in zero-shot and few-shot scenarios. Additionally, we compare them with fine-tuned sLLMs, using sLLMs to learn contextual embeddings of text from software platforms. Our experimental findings demonstrate that bLLMs exhibit state-of-the-art performance on datasets marked by limited training data and imbalanced distributions. bLLMs can also achieve excellent performance under a zero-shot setting. However, when ample training data is available or the dataset exhibits a more balanced distribution, fine-tuned sLLMs can still achieve superior results.",True,True,"Zhang, Ting and Irsan, Ivana Clairine and Thung, Ferdian and Lo, David",2025.0,,,10.1145/3697009,ACM Trans. Softw. Eng. Methodol.,"Revisiting Sentiment Analysis for Software Engineering in the Era of Large Language Models",Revisiting Sentiment Analysis for Software Engineering in the Era of ...,https://www.bohrium.com/paper-details/revisiting-sentiment-analysis-for-software-engineering-in-the-era-of-large-language-models/921465890266939827-108627,"With the emergence of large language models (LLMs), it is pertinent to investigate how these models perform in the context of sentiment analysis" "What About Emotions? Guiding Fine-Grained Emotion Extraction from Mobile App Reviews",2505.23452v1,Sayeed2024,\cite{Sayeed2024},{Annotating Materials Science Text: A Semi-automated Approach for Crafting Outputs with Gemini Pro},,,True,False,"Sayeed, Hasan M. 
and Mohanty, Trupti and Sparks, Taylor D.",2024.0,Jun,,10.1007/s40192-024-00356-4,Integrating Materials and Manufacturing Innovation,{Annotating Materials Science Text: A Semi-automated Approach for Crafting Outputs with Gemini Pro},Annotating Materials Science Text: A Semi-automated ...,https://www.youtube.com/shorts/J1VJ4eovLzM,Annotating Materials Science Text: A Semi-automated Approach for Crafting Outputs with Gemini Pro. "What About Emotions? Guiding Fine-Grained Emotion Extraction from Mobile App Reviews",2505.23452v1,Shan2024,\cite{Shan2024},Using Large Language Models to Automate Annotation and Part-of-Math Tagging of Math Equations,,,True,False,"Shan, Ruocheng and Youssef, Abdou",2024.0,,,10.1007/978-3-031-66997-2_1,,Using Large Language Models to Automate Annotation and Part-of-Math Tagging of Math Equations,Using Large Language Models to Automate Annotation and Part-of ...,https://link.springer.com/chapter/10.1007/978-3-031-66997-2_1,"Using Large Language Models to Automate Annotation and Part-of-Math Tagging of Math Equations | SpringerLink This paper explores the potential of leveraging Large Language Models (LLMs) for the tasks of automated annotation and Part-of-Math (POM) tagging of equations." "What About Emotions? Guiding Fine-Grained Emotion Extraction from Mobile App Reviews",2505.23452v1,Aguda2024,\cite{Aguda2024},Large Language Models as Financial Data Annotators: A Study on Effectiveness and Efficiency,,,True,False,"Aguda, Toyin D. and Siddagangappa, Suchetha and Kochkina, Elena and Kaur, Simerjot and Wang, Dongsheng and Smiley, Charese",2024.0,,https://aclanthology.org/2024.lrec-main.885/,,,Large Language Models as Financial Data Annotators: A Study on Effectiveness and Efficiency,Large Language Models as Financial Data Annotators: A Study on Effectiveness and Efficiency,http://arxiv.org/pdf/2403.18152v1,"Collecting labeled datasets in finance is challenging due to scarcity of domain experts and higher cost of employing them. While Large Language Models (LLMs) have demonstrated remarkable performance in data annotation tasks on general domain datasets, their effectiveness on domain specific datasets remains underexplored. To address this gap, we investigate the potential of LLMs as efficient data annotators for extracting relations in financial documents. We compare the annotations produced by three LLMs (GPT-4, PaLM 2, and MPT Instruct) against expert annotators and crowdworkers. We demonstrate that the current state-of-the-art LLMs can be sufficient alternatives to non-expert crowdworkers. We analyze models using various prompts and parameter settings and find that customizing the prompts for each relation group by providing specific examples belonging to those groups is paramount. Furthermore, we introduce a reliability index (LLM-RelIndex) used to identify outputs that may require expert attention.
Finally, we perform an extensive time, cost and error analysis and provide recommendations for the collection and usage of automated annotations in domain-specific settings." "What About Emotions? Guiding Fine-Grained Emotion Extraction from Mobile App Reviews",2505.23452v1,Yu2024,\cite{Yu2024},"Assessing the potential of LLM-assisted annotation for corpus-based pragmatics and discourse analysis: The case of apology",http://arxiv.org/abs/2305.08339v5,"Certain forms of linguistic annotation, like part of speech and semantic tagging, can be automated with high accuracy. However, manual annotation is still necessary for complex pragmatic and discursive features that lack a direct mapping to lexical forms. This manual process is time-consuming and error-prone, limiting the scalability of function-to-form approaches in corpus linguistics. To address this, our study explores the possibility of using large language models (LLMs) to automate pragma-discursive corpus annotation. We compare GPT-3.5 (the model behind the free-to-use version of ChatGPT), GPT-4 (the model underpinning the precise mode of Bing chatbot), and a human coder in annotating apology components in English based on the local grammar framework. We find that GPT-4 outperformed GPT-3.5, with accuracy approaching that of a human coder. These results suggest that LLMs can be successfully deployed to aid pragma-discursive corpus annotation, making the process more efficient, scalable and accessible.",True,True,Danni Yu and Luyang Li and Hang Su and Matteo Fuoli,2024.0,December,,10.1075/ijcl.23087.yu,International Journal of Corpus Linguistics,"Assessing the potential of LLM-assisted annotation for corpus-based pragmatics and discourse analysis: The case of apology",Assessing the potential of LLM-assisted annotation for ...,https://benjamins.com/catalog/ijcl.23087.yu?srsltid=AfmBOooIotZsGLGzzNQZ3DaE9zu4rL9x1uXUjy42VGSZY2xLm7y232qS,Our study explores the possibility of using large language models (LLMs) to automate pragma-discursive corpus annotation. "What About Emotions? Guiding Fine-Grained Emotion Extraction from Mobile App Reviews",2505.23452v1,kim-etal-2024-meganno,\cite{kim-etal-2024-meganno},{MEGA}nno+: A Human-{LLM} Collaborative Annotation System,,,True,False,"Kim, Hannah and Mitra, Kushan and Li Chen, Rafael and Rahman, Sajjadur and Zhang, Dan",2024.0,,,,,{MEGA}nno+: A Human-{LLM} Collaborative Annotation System,MEGAnno+: A Human-LLM Collaborative Annotation System,https://megagon.ai/publications/meganno-a-human-llm-collaborative-annotation-system/,"We present MEGAnno+, a human-LLM collaborative annotation system that offers effective LLM agent and annotation management, convenient and robust LLM annotation" "What About Emotions? 
Guiding Fine-Grained Emotion Extraction from Mobile App Reviews",2505.23452v1,Wang2024,\cite{Wang2024},Human-LLM Collaborative Annotation Through Effective Verification of LLM Labels,,,True,False,"Wang, Xinru and Kim, Hannah and Rahman, Sajjadur and Mitra, Kushan and Miao, Zhengjie",2024.0,,,10.1145/3613904.3641960,,Human-LLM Collaborative Annotation Through Effective Verification of LLM Labels,Human-LLM Collaborative Annotation Through Effective Verification ...,https://megagon.ai/human-llm-collab-annote-thru-llm-labels/,"Human-LLM Collaborative Annotation Through Effective Verification of LLM Labels - Megagon. Human-LLM Collaborative Annotation Framework: Step 2: Verifier assesses LLM labels and explanations. Step 3: Human annotators re-annotate instances with the lowest verifier scores. We propose a multi-step human-LLM collaborative framework for data annotation to ensure accuracy and trustworthiness. In the human re-annotation step, LLM explanations can help human annotators to understand and trust LLMs as collaborators [Wang et al., 2023]. * RQ2: Does providing LLM-generated labels and explanations help humans in re-annotation? We discussed how to design LLM-human collaborative annotation frameworks by leveraging a LLM's label and self-explanation in automatic verification and re-annotation." "What About Emotions? Guiding Fine-Grained Emotion Extraction from Mobile App Reviews",2505.23452v1,Rouzegar2024,\cite{Rouzegar2024},Enhancing Text Classification through LLM-Driven Active Learning and Human Annotation,,,True,False,Hamidreza Rouzegar and Masoud Makrehchi,2024.0,March,https://aclanthology.org/2024.law-1.10,10.18653/v1/2024.law-1.10,,Enhancing Text Classification through LLM-Driven Active Learning and Human Annotation,hrouzegar/Enhancing-Text-Classification-through-LLM-Driven ...,https://github.com/hrouzegar/Enhancing-Text-Classification-through-LLM-Driven-Active-Learning-and-Human-Annotation,We introduce a novel methodology integrating human annotators and GPT-3.5 annotations within an Active Learning framework for text classification. Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,wu2017recurrent,\cite{wu2017recurrent},Recurrent recommender networks,,,True,False,"Wu, Chao-Yuan and Ahmed, Amr and Beutel, Alex and Smola, Alexander J and Jing, How",2017.0,,,,,Recurrent recommender networks,Recurrent Recommender Networks - Google Research,https://research.google/pubs/recurrent-recommender-networks/,We propose Recurrent Recommender Networks (RRN) that are able to predict future behavioral trajectories. This is achieved by endowing both users and movies with Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,chung2014empirical,\cite{chung2014empirical},"Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling",http://arxiv.org/abs/1412.3555v1,"In this paper we compare different types of recurrent units in recurrent neural networks (RNNs). Especially, we focus on more sophisticated units that implement a gating mechanism, such as a long short-term memory (LSTM) unit and a recently proposed gated recurrent unit (GRU). We evaluate these recurrent units on the tasks of polyphonic music modeling and speech signal modeling.
Our experiments revealed that these advanced recurrent units are indeed better than more traditional recurrent units such as tanh units. Also, we found GRU to be comparable to LSTM.",True,True,"Chung, Junyoung and Gulcehre, Caglar and Cho, KyungHyun and Bengio, Yoshua",2014.0,,,,arXiv preprint arXiv:1412.3555,"Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling",Empirical Evaluation of Gated Recurrent Neural Networks on ... - arXiv,https://arxiv.org/abs/1412.3555,"arXiv:1412.3555 (cs) View a PDF of the paper titled Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling, by Junyoung Chung and Caglar Gulcehre and KyungHyun Cho and Yoshua Bengio" Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,hidasi2015session,\cite{hidasi2015session},Session-based Recommendations with Recurrent Neural Networks,http://arxiv.org/abs/1511.06939v4,"We apply recurrent neural networks (RNN) on a new domain, namely recommender systems. Real-life recommender systems often face the problem of having to base recommendations only on short session-based data (e.g. a small sportsware website) instead of long user histories (as in the case of Netflix). In this situation the frequently praised matrix factorization approaches are not accurate. This problem is usually overcome in practice by resorting to item-to-item recommendations, i.e. recommending similar items. We argue that by modeling the whole session, more accurate recommendations can be provided. We therefore propose an RNN-based approach for session-based recommendations. Our approach also considers practical aspects of the task and introduces several modifications to classic RNNs such as a ranking loss function that make it more viable for this specific problem.
Experimental results on two data-sets show marked improvements over widely used approaches.",True,True,"Hidasi, B",2015.0,,,,arXiv preprint arXiv:1511.06939,Session-based Recommendations with Recurrent Neural Networks,Session-based Recommendations with Recurrent Neural Networks,https://www.semanticscholar.org/paper/Session-based-Recommendations-with-Recurrent-Neural-Hidasi-Karatzoglou/e0021d61c2ab1334bc725852edd44597f4c65dff,"It is argued that by modeling the whole session, more accurate recommendations can be provided by an RNN-based approach for session-based recommendations," Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,liu2018stamp,\cite{liu2018stamp},STAMP: short-term attention/memory priority model for session-based recommendation,,,True,False,"Liu, Qiao and Zeng, Yifu and Mokhosi, Refuoe and Zhang, Haibin",2018.0,,,,,STAMP: short-term attention/memory priority model for session-based recommendation,GitHub - uestcnlp/STAMP: Code for the KDD 2018 paper,https://github.com/uestcnlp/STAMP,"GitHub - uestcnlp/STAMP: This is the code for the KDD 2018 paper: STAMP: Short-Term Attention/Memory Priority Model for Session-based Recommendation. Because for each dataset we have some different parameters, there are two model files `STAMP_rsc.py` and `STAMP_cikm.py`." Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,li2017neural,\cite{li2017neural},Neural Attentive Session-based Recommendation,http://arxiv.org/abs/1711.04725v1,"Given e-commerce scenarios that user profiles are invisible, session-based recommendation is proposed to generate recommendation results from short sessions. Previous work only considers the user's sequential behavior in the current session, whereas the user's main purpose in the current session is not emphasized. In this paper, we propose a novel neural networks framework, i.e., Neural Attentive Recommendation Machine (NARM), to tackle this problem. Specifically, we explore a hybrid encoder with an attention mechanism to model the user's sequential behavior and capture the user's main purpose in the current session, which are combined as a unified session representation later. We then compute the recommendation scores for each candidate item with a bi-linear matching scheme based on this unified session representation. We train NARM by jointly learning the item and session representations as well as their matchings. We carried out extensive experiments on two benchmark datasets. Our experimental results show that NARM outperforms state-of-the-art baselines on both datasets.
Furthermore, we also find that NARM achieves a significant improvement on long sessions, which demonstrates its advantages in modeling the user's sequential behavior and main purpose simultaneously.",True,True,"Li, Jing and Ren, Pengjie and Chen, Zhumin and Ren, Zhaochun and Lian, Tao and Ma, Jun",2017.0,,,,,Neural Attentive Session-based Recommendation,[1711.04725] Neural Attentive Session-based Recommendation,https://arxiv.org/abs/1711.04725,"arXiv:1711.04725 (cs) Authors: Jing Li, Pengjie Ren, Zhumin Chen, Zhaochun Ren, Jun Ma. View a PDF of the paper titled Neural Attentive Session-based Recommendation, by Jing Li and 4 other authors" Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,rendle2009bpr,\cite{rendle2009bpr},BPR: Bayesian Personalized Ranking from Implicit Feedback,http://arxiv.org/abs/1205.2618v1,"Item recommendation is the task of predicting a personalized ranking on a set of items (e.g. websites, movies, products). In this paper, we investigate the most common scenario with implicit feedback (e.g. clicks, purchases). There are many methods for item recommendation from implicit feedback like matrix factorization (MF) or adaptive knearest-neighbor (kNN). Even though these methods are designed for the item prediction task of personalized ranking, none of them is directly optimized for ranking. In this paper we present a generic optimization criterion BPR-Opt for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem.
We also provide a generic learning algorithm for optimizing models with respect to BPR-Opt. The learning method is based on stochastic gradient descent with bootstrap sampling. We show how to apply our method to two state-of-the-art recommender models: matrix factorization and adaptive kNN. Our experiments indicate that for the task of personalized ranking our optimization method outperforms the standard learning techniques for MF and kNN. The results show the importance of optimizing models for the right criterion." Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,vaswani2017attention,\cite{vaswani2017attention},Attention Is All You Need,http://arxiv.org/abs/1706.03762v7,"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.",True,True,"Vaswani, A",2017.0,,,,Advances in Neural Information Processing Systems,Attention Is All You Need,Attention Is All You Need,http://arxiv.org/pdf/1706.03762v7,"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data." Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,kang2018self,\cite{kang2018self},Self-Attentive Sequential Recommendation,http://arxiv.org/abs/1808.09781v1,"Sequential dynamics are a key feature of many modern recommender systems, which seek to capture the `context' of users' activities on the basis of actions they have performed recently. 
To capture such patterns, two approaches have proliferated: Markov Chains (MCs) and Recurrent Neural Networks (RNNs). Markov Chains assume that a user's next action can be predicted on the basis of just their last (or last few) actions, while RNNs in principle allow for longer-term semantics to be uncovered. Generally speaking, MC-based methods perform best in extremely sparse datasets, where model parsimony is critical, while RNNs perform better in denser datasets where higher model complexity is affordable. The goal of our work is to balance these two goals, by proposing a self-attention based sequential model (SASRec) that allows us to capture long-term semantics (like an RNN), but, using an attention mechanism, makes its predictions based on relatively few actions (like an MC). At each time step, SASRec seeks to identify which items are `relevant' from a user's action history, and use them to predict the next item. Extensive empirical studies show that our method outperforms various state-of-the-art sequential models (including MC/CNN/RNN-based approaches) on both sparse and dense datasets. Moreover, the model is an order of magnitude more efficient than comparable CNN/RNN-based models. Visualizations on attention weights also show how our model adaptively handles datasets with various density, and uncovers meaningful patterns in activity sequences.",True,True,"Kang, Wang-Cheng and McAuley, Julian",2018.0,,,,,Self-Attentive Sequential Recommendation,Self Attention on Recommendation System - Jeffery chiang,https://medium.com/analytics-vidhya/self-attention-on-recommendation-system-self-attentive-sequential-recommendation-review-c94796dde001,"Self-attention is a powerful mechanism used in deep learning to process sequential data, such as sentences or time-series data, by considering the relationship" Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,sun2019bert4rec,\cite{sun2019bert4rec},"BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer",http://arxiv.org/abs/1904.06690v2,"Modeling users' dynamic and evolving preferences from their historical behaviors is challenging and crucial for recommendation systems. Previous methods employ sequential neural networks (e.g., Recurrent Neural Network) to encode users' historical interactions from left to right into hidden representations for making recommendations. Although these methods achieve satisfactory results, they often assume a rigidly ordered sequence which is not always practical. We argue that such left-to-right unidirectional architectures restrict the power of the historical sequence representations. For this purpose, we introduce a Bidirectional Encoder Representations from Transformers for sequential Recommendation (BERT4Rec). However, jointly conditioning on both left and right context in deep bidirectional model would make the training become trivial since each item can indirectly ""see the target item"". To address this problem, we train the bidirectional model using the Cloze task, predicting the masked items in the sequence by jointly conditioning on their left and right context. Comparing with predicting the next item at each position in a sequence, the Cloze task can produce more samples to train a more powerful bidirectional model. 
Extensive experiments on four benchmark datasets show that our model outperforms various state-of-the-art sequential models consistently.",True,True,"Sun, Fei and Liu, Jun and Wu, Jian and Pei, Changhua and Lin, Xiao and Ou, Wenwu and Jiang, Peng",2019.0,,,,,"BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer",BERT4Rec: Sequential Recommendation with Bidirectional Encoder ...,https://dl.acm.org/doi/10.1145/3357384.3357895,"We proposed a sequential recommendation model called BERT4Rec, which employs the deep bidirectional self-attention to model user behavior sequences." Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,beltagy2020longformer,\cite{beltagy2020longformer},Longformer: The Long-Document Transformer,http://arxiv.org/abs/2004.05150v2,"Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on WikiHop and TriviaQA. We finally introduce the Longformer-Encoder-Decoder (LED), a Longformer variant for supporting long document generative sequence-to-sequence tasks, and demonstrate its effectiveness on the arXiv summarization dataset.",True,True,"Beltagy, Iz and Peters, Matthew E and Cohan, Arman",2020.0,,,,arXiv preprint arXiv:2004.05150,Longformer: The Long-Document Transformer,[PDF] Longformer: The Long-Document Transformer,https://ysu1989.github.io/courses/au20/cse5539/Longformer.pdf,"Longformer: The Long-Document Transformer, Beltagy et al., 2020. Background: Transformers have achieved state-of-the-art results in a wide range of natural language tasks including generative language modeling and discriminative language understanding. Conclusion: Longformer is a transformer-based model that is scalable for processing long documents; easy to perform a wide range of document-level NLP tasks without chunking/shortening the long input; no complex architecture to combine information across these chunks; combines local and global information while also scaling linearly with the sequence length; outperforms RoBERTa on long document tasks" Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,tan2021sparse,\cite{tan2021sparse},Sparse-Interest Network for Sequential Recommendation,http://arxiv.org/abs/2102.09267v1,"Recent methods in sequential recommendation focus on learning an overall embedding vector from a user's behavior sequence for the next-item recommendation.
However, from empirical analysis, we discovered that a user's behavior sequence often contains multiple conceptually distinct items, while a unified embedding vector is primarily affected by one's most recent frequent actions. Thus, it may fail to infer the next preferred item if conceptually similar items are not dominant in recent interactions. To this end, an alternative solution is to represent each user with multiple embedding vectors encoding different aspects of the user's intentions. Nevertheless, recent work on multi-interest embedding usually considers a small number of concepts discovered via clustering, which may not be comparable to the large pool of item categories in real systems. It is a non-trivial task to effectively model a large number of diverse conceptual prototypes, as items are often not conceptually well clustered in fine granularity. Besides, an individual usually interacts with only a sparse set of concepts. In light of this, we propose a novel \textbf{S}parse \textbf{I}nterest \textbf{NE}twork (SINE) for sequential recommendation. Our sparse-interest module can adaptively infer a sparse set of concepts for each user from the large concept pool and output multiple embeddings accordingly. Given multiple interest embeddings, we develop an interest aggregation module to actively predict the user's current intention and then use it to explicitly model multiple interests for next-item prediction. Empirical results on several public benchmark datasets and one large-scale industrial dataset demonstrate that SINE can achieve substantial improvement over state-of-the-art methods.",True,True,"Tan, Qiaoyu and Zhang, Jianwei and Yao, Jiangchao and Liu, Ninghao and Zhou, Jingren and Yang, Hongxia and Hu, Xia",2021.0,,,,,Sparse-Interest Network for Sequential Recommendation,Sparse-Interest Network for Sequential Recommendation,https://dl.acm.org/doi/10.1145/3437963.3441811,We propose a novel Sparse Interest NEtwork (SINE) for sequential recommendation. Our sparse-interest module can adaptively infer a sparse set of concepts for Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,fan2021lighter,\cite{fan2021lighter},Lighter and better: low-rank decomposed self-attention networks for next-item recommendation,,,True,False,"Fan, Xinyan and Liu, Zheng and Lian, Jianxun and Zhao, Wayne Xin and Xie, Xing and Wen, Ji-Rong",2021.0,,,,,Lighter and better: low-rank decomposed self-attention networks for next-item recommendation,[PDF] Low-Rank Decomposed Self-Attention Networks for Next-Item ...,https://www.microsoft.com/en-us/research/wp-content/uploads/2021/05/LighterandBetter_Low-RankDecomposedSelf-AttentionNetworksforNext-ItemRecommendation.pdf,"Lighter and Better: Low-Rank Decomposed Self-Attention Networks for Next-Item Recommendation. Xinyan Fan, Zheng Liu, Jianxun Lian, Wayne Xin Zhao, Xing Xie, and Ji-Rong Wen (Renmin University of China; Microsoft Research Asia). ABSTRACT: Self-attention networks (SANs) have been intensively applied for sequential recommenders, but they are limited due to: (1) the quadratic complexity and vulnerability to over-parameterization in self-attention; (2) inaccurate modeling of sequential relations between items due to the implicit position encoding.
Our main contributions are summarized as follows: • A novel SANs-based sequential recommender, LightSANs, with two advantages: (1) the low-rank decomposed self-attention for more efficient and precise modeling of context-aware represen-tations; (2) the decoupled position encoding for more effective modeling of sequential relations between items." Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,zhu2024collaborative,\cite{zhu2024collaborative},Collaborative Large Language Model for Recommender Systems,http://arxiv.org/abs/2311.01343v4,"Recently, there has been growing interest in developing the next-generation recommender systems (RSs) based on pretrained large language models (LLMs). However, the semantic gap between natural language and recommendation tasks is still not well addressed, leading to multiple issues such as spuriously correlated user/item descriptors, ineffective language modeling on user/item data, inefficient recommendations via auto-regression, etc. In this paper, we propose CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and ID paradigm of RSs, aiming to address the above challenges simultaneously. We first extend the vocabulary of pretrained LLMs with user/item ID tokens to faithfully model user/item collaborative and content semantics. Accordingly, a novel soft+hard prompting strategy is proposed to effectively learn user/item collaborative/content token embeddings via language modeling on RS-specific corpora, where each document is split into a prompt consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and a main text consisting of homogeneous item tokens or vocab tokens to facilitate stable and effective language modeling. In addition, a novel mutual regularization strategy is introduced to encourage CLLM4Rec to capture recommendation-related information from noisy user/item content. Finally, we propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where an item prediction head with multinomial likelihood is added to the pretrained CLLM4Rec backbone to predict hold-out items based on soft+hard prompts established from masked user-item interaction history, where recommendations of multiple items can be generated efficiently without hallucination. Codes are released at https://github.com/yaochenzhu/llm4rec.",True,True,"Zhu, Yaochen and Wu, Liang and Guo, Qi and Hong, Liangjie and Li, Jundong",2024.0,,,,,Collaborative Large Language Model for Recommender Systems,Collaborative Large Language Model for Recommender Systems,http://arxiv.org/pdf/2311.01343v4,"Recently, there has been growing interest in developing the next-generation recommender systems (RSs) based on pretrained large language models (LLMs). However, the semantic gap between natural language and recommendation tasks is still not well addressed, leading to multiple issues such as spuriously correlated user/item descriptors, ineffective language modeling on user/item data, inefficient recommendations via auto-regression, etc. In this paper, we propose CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and ID paradigm of RSs, aiming to address the above challenges simultaneously. We first extend the vocabulary of pretrained LLMs with user/item ID tokens to faithfully model user/item collaborative and content semantics. 
Accordingly, a novel soft+hard prompting strategy is proposed to effectively learn user/item collaborative/content token embeddings via language modeling on RS-specific corpora, where each document is split into a prompt consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and a main text consisting of homogeneous item tokens or vocab tokens to facilitate stable and effective language modeling. In addition, a novel mutual regularization strategy is introduced to encourage CLLM4Rec to capture recommendation-related information from noisy user/item content. Finally, we propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where an item prediction head with multinomial likelihood is added to the pretrained CLLM4Rec backbone to predict hold-out items based on soft+hard prompts established from masked user-item interaction history, where recommendations of multiple items can be generated efficiently without hallucination. Codes are released at https://github.com/yaochenzhu/llm4rec." Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,zhao2023survey,\cite{zhao2023survey},Large Language Models: A Survey,http://arxiv.org/abs/2402.06196v3,"Large Language Models (LLMs) have drawn a lot of attention due to their strong performance on a wide range of natural language tasks, since the release of ChatGPT in November 2022. LLMs' ability of general-purpose language understanding and generation is acquired by training billions of model's parameters on massive amounts of text data, as predicted by scaling laws \cite{kaplan2020scaling,hoffmann2022training}. The research area of LLMs, while very recent, is evolving rapidly in many different ways. In this paper, we review some of the most prominent LLMs, including three popular LLM families (GPT, LLaMA, PaLM), and discuss their characteristics, contributions and limitations. We also give an overview of techniques developed to build, and augment LLMs. We then survey popular datasets prepared for LLM training, fine-tuning, and evaluation, review widely used LLM evaluation metrics, and compare the performance of several popular LLMs on a set of representative benchmarks. Finally, we conclude the paper by discussing open challenges and future research directions.",True,True,"Zhao, Wayne Xin and Zhou, Kun and Li, Junyi and Tang, Tianyi and Wang, Xiaolei and Hou, Yupeng and Min, Yingqian and Zhang, Beichen and Zhang, Junjie and Dong, Zican and others",2023.0,,,,arXiv preprint arXiv:2303.18223,Large Language Models: A Survey,Large Language Models: A Survey,http://arxiv.org/pdf/2402.06196v3,"Large Language Models (LLMs) have drawn a lot of attention due to their strong performance on a wide range of natural language tasks, since the release of ChatGPT in November 2022. LLMs' ability of general-purpose language understanding and generation is acquired by training billions of model's parameters on massive amounts of text data, as predicted by scaling laws \cite{kaplan2020scaling,hoffmann2022training}. The research area of LLMs, while very recent, is evolving rapidly in many different ways. In this paper, we review some of the most prominent LLMs, including three popular LLM families (GPT, LLaMA, PaLM), and discuss their characteristics, contributions and limitations. We also give an overview of techniques developed to build, and augment LLMs. 
We then survey popular datasets prepared for LLM training, fine-tuning, and evaluation, review widely used LLM evaluation metrics, and compare the performance of several popular LLMs on a set of representative benchmarks. Finally, we conclude the paper by discussing open challenges and future research directions." Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,wu2024survey,\cite{wu2024survey},Explainability for Large Language Models: A Survey,http://arxiv.org/abs/2309.01029v3,"Large language models (LLMs) have demonstrated impressive capabilities in natural language processing. However, their internal mechanisms are still unclear and this lack of transparency poses unwanted risks for downstream applications. Therefore, understanding and explaining these models is crucial for elucidating their behaviors, limitations, and social impacts. In this paper, we introduce a taxonomy of explainability techniques and provide a structured overview of methods for explaining Transformer-based language models. We categorize techniques based on the training paradigms of LLMs: traditional fine-tuning-based paradigm and prompting-based paradigm. For each paradigm, we summarize the goals and dominant approaches for generating local explanations of individual predictions and global explanations of overall model knowledge. We also discuss metrics for evaluating generated explanations, and discuss how explanations can be leveraged to debug models and improve performance. Lastly, we examine key challenges and emerging opportunities for explanation techniques in the era of LLMs in comparison to conventional machine learning models.",True,True,"Wu, Likang and Zheng, Zhi and Qiu, Zhaopeng and Wang, Hao and Gu, Hongchao and Shen, Tingjia and Qin, Chuan and Zhu, Chen and Zhu, Hengshu and Liu, Qi and others",2024.0,,,,World Wide Web,Explainability for Large Language Models: A Survey,Explainability for Large Language Models: A Survey,http://arxiv.org/pdf/2309.01029v3,"Large language models (LLMs) have demonstrated impressive capabilities in natural language processing. However, their internal mechanisms are still unclear and this lack of transparency poses unwanted risks for downstream applications. Therefore, understanding and explaining these models is crucial for elucidating their behaviors, limitations, and social impacts. In this paper, we introduce a taxonomy of explainability techniques and provide a structured overview of methods for explaining Transformer-based language models. We categorize techniques based on the training paradigms of LLMs: traditional fine-tuning-based paradigm and prompting-based paradigm. For each paradigm, we summarize the goals and dominant approaches for generating local explanations of individual predictions and global explanations of overall model knowledge. We also discuss metrics for evaluating generated explanations, and discuss how explanations can be leveraged to debug models and improve performance. Lastly, we examine key challenges and emerging opportunities for explanation techniques in the era of LLMs in comparison to conventional machine learning models." 
Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,zhang2023recommendation,\cite{zhang2023recommendation},"Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach",http://arxiv.org/abs/2305.07001v1,"In the past decades, recommender systems have attracted much attention in both research and industry communities, and a large number of studies have been devoted to developing effective recommendation models. Basically speaking, these models mainly learn the underlying user preference from historical behavior data, and then estimate the user-item matching relationships for recommendations. Inspired by the recent progress on large language models (LLMs), we take a different approach to developing the recommendation models, considering recommendation as instruction following by LLMs. The key idea is that the preferences or needs of a user can be expressed in natural language descriptions (called instructions), so that LLMs can understand and further execute the instruction for fulfilling the recommendation task. Instead of using public APIs of LLMs, we instruction tune an open-source LLM (3B Flan-T5-XL), in order to better adapt LLMs to recommender systems. For this purpose, we first design a general instruction format for describing the preference, intention, task form and context of a user in natural language. Then we manually design 39 instruction templates and automatically generate a large amount of user-personalized instruction data (252K instructions) with varying types of preferences and intentions. To demonstrate the effectiveness of our approach, we instantiate the instruction templates into several widely-studied recommendation (or search) tasks, and conduct extensive experiments on these tasks with real-world datasets. Experiment results show that the proposed approach can outperform several competitive baselines, including the powerful GPT-3.5, on these evaluation tasks. Our approach sheds light on developing more user-friendly recommender systems, in which users can freely communicate with the system and obtain more accurate recommendations via natural language instructions.",True,True,"Zhang, Junjie and Xie, Ruobing and Hou, Yupeng and Zhao, Xin and Lin, Leyu and Wen, Ji-Rong",2023.0,,,,ACM Transactions on Information Systems,"Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach",A Large Language Model Empowered Recommendation Approach,https://arxiv.org/abs/2305.07001,"View a PDF of the paper titled Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach, by Junjie Zhang and 5 other authors Inspired by the recent progress on large language models (LLMs), we take a different approach to developing the recommendation models, considering recommendation as instruction following by LLMs. The key idea is that the preferences or needs of a user can be expressed in natural language descriptions (called instructions), so that LLMs can understand and further execute the instruction for fulfilling the recommendation task. 
" Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,hou2024large,\cite{hou2024large},Large language models are zero-shot rankers for recommender systems,,,True,False,"Hou, Yupeng and Zhang, Junjie and Lin, Zihan and Lu, Hongyu and Xie, Ruobing and McAuley, Julian and Zhao, Wayne Xin",2024.0,,,,,Large language models are zero-shot rankers for recommender systems,[PDF] Large Language Models are Effective Text Rankers with Pairwise ...,https://aclanthology.org/2024.findings-naacl.97.pdf,"Large language models are zero-shot rankers for recommender systems. arXiv preprint. arXiv:2305.08845. Wenlong Huang, Pieter Abbeel, Deepak" Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,cui2022m6,\cite{cui2022m6},M6-rec: Generative pretrained language models are open-ended recommender systems,,,True,False,"Cui, Zeyu and Ma, Jianxin and Zhou, Chang and Zhou, Jingren and Yang, Hongxia",2022.0,,,,arXiv preprint arXiv:2205.08084,M6-rec: Generative pretrained language models are open-ended recommender systems,M6-Rec: Generative Pretrained Language Models are Open-Ended ...,https://arxiv.org/abs/2205.08084,"M6-Rec: Generative Pretrained Language Models are Open-Ended Recommender Systems, by Zeyu Cui and 4 other authors. In this paper, we explore the possibility of developing a unified foundation model to support \emph{open-ended domains and tasks} in an industrial recommender system, which may reduce the demand on downstream settings' data and can minimize the carbon footprint by avoiding training a separate model from scratch for every task." Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,geng2022recommendation,\cite{geng2022recommendation},"Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5)",http://arxiv.org/abs/2203.13366v7,"For a long time, different recommendation tasks typically require designing task-specific architectures and training objectives. As a result, it is hard to transfer the learned knowledge and representations from one task to another, thus restricting the generalization ability of existing recommendation approaches, e.g., a sequential recommendation model can hardly be applied or transferred to a review generation method. To deal with such issues, considering that language can describe almost anything and language grounding is a powerful medium to represent various problems or tasks, we present a flexible and unified text-to-text paradigm called ""Pretrain, Personalized Prompt, and Predict Paradigm"" (P5) for recommendation, which unifies various recommendation tasks in a shared framework. In P5, all data such as user-item interactions, user descriptions, item metadata, and user reviews are converted to a common format -- natural language sequences. The rich information from natural language assists P5 to capture deeper semantics for personalization and recommendation.
Specifically, P5 learns different tasks with the same language modeling objective during pretraining. Thus, it serves as the foundation model for various downstream recommendation tasks, allows easy integration with other modalities, and enables instruction-based recommendation based on prompts. P5 advances recommender systems from shallow model to deep model to big model, and will revolutionize the technical form of recommender systems towards universal recommendation engine. With adaptive personalized prompt for different users, P5 is able to make predictions in a zero-shot or few-shot manner and largely reduces the necessity for extensive fine-tuning. On several recommendation benchmarks, we conduct experiments to show the effectiveness of P5. We release the source code at https://github.com/jeykigung/P5.",True,True,"Geng, Shijie and Liu, Shuchang and Fu, Zuohui and Ge, Yingqiang and Zhang, Yongfeng",2022.0,,,,,"Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5)","[PDF] A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5)",https://par.nsf.gov/servlets/purl/10434475,"To help advance future research on Recommendation as Language Processing (RLP), Personalized. Foundation Models (PFM), and Universal Recommendation Engine. (URE)" Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,gao2013cross,\cite{gao2013cross},Cross-domain recommendation via cluster-level latent factor model,,,True,False,"Gao, Sheng and Luo, Hao and Chen, Da and Li, Shantao and Gallinari, Patrick and Guo, Jun",2013.0,,,,,Cross-domain recommendation via cluster-level latent factor model,Cross-Domain Recommendation via Cluster-Level Latent Factor ...,https://link.springer.com/chapter/10.1007/978-3-642-40991-2_11,"In this paper, we propose a novel cluster-level based latent factor model to enhance the cross-domain recommendation, which can not only learn the common rating" Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,singh2008relational,\cite{singh2008relational},Relational learning via collective matrix factorization,,,True,False,"Singh, Ajit P and Gordon, Geoffrey J",2008.0,,,,,Relational learning via collective matrix factorization,[PDF] Relational Learning via Collective Matrix Factorization,https://www.cs.cmu.edu/~ggordon/CMU-ML-08-109.pdf,"Relational learning is concerned with predicting unknown values of a relation, given a database of entities and observed relations among entities." Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,liu2020cross,\cite{liu2020cross},Cross domain recommendation via bi-directional transfer graph collaborative filtering networks,,,True,False,"Liu, Meng and Li, Jianjun and Li, Guohui and Pan, Peng",2020.0,,,,,Cross domain recommendation via bi-directional transfer graph collaborative filtering networks,sunshinelium/Bi-TGCF: Cross Domain Recommendation ... - GitHub,https://github.com/sunshinelium/Bi-TGCF,Tensorflow Implementation of BiTGCF: Cross Domain Recommendation via Bi-directional Transfer Graph Collaborative Filtering Networks. 
Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,zhu2019dtcdr,\cite{zhu2019dtcdr},Dtcdr: A framework for dual-target cross-domain recommendation,,,True,False,"Zhu, Feng and Chen, Chaochao and Wang, Yan and Liu, Guanfeng and Zheng, Xiaolin",2019.0,,,,,Dtcdr: A framework for dual-target cross-domain recommendation,(PDF) DTCDR: A Framework for Dual-Target Cross- ...,https://www.researchgate.net/publication/337018321_DTCDR_A_Framework_for_Dual-Target_Cross-Domain_Recommendation,"In order to address the data sparsity problem in recommender systems, in recent years, Cross-Domain Recommendation (CDR) leverages the relatively richer information from a source domain to improve the recommendation performance on a target domain with sparser information. However, each of the two domains may be relatively richer in certain types of information (e.g., ratings, reviews, user profiles, item details, and tags), and thus, if we can leverage such information well, it is possible to improve the recommendation performance on both domains simultaneously (i.e., dual-target CDR), rather than a single target domain only. Then, based on Multi-Task Learning (MTL), we design an adaptable embedding-sharing strategy to combine and share the embeddings of common users across domains, with which DTCDR can improve the recommendation performance on both richer and sparser (i.e., dual-target) domains simultaneously." Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,li2023one,\cite{li2023one},"One for all, all for one: Learning and transferring user embeddings for cross-domain recommendation",,,True,False,"Li, Chenglin and Xie, Yuanzhen and Yu, Chenyun and Hu, Bo and Li, Zang and Shu, Guoqiang and Qie, Xiaohu and Niu, Di",2023.0,,,,,"One for all, all for one: Learning and transferring user embeddings for cross-domain recommendation",Learning and Transferring User Embeddings for Cross ...,https://arxiv.org/abs/2211.11964,"[2211.11964] One for All, All for One: Learning and Transferring User Embeddings for Cross-Domain Recommendation. In this study, we propose CAT-ART, a multi-target CDR method that learns to improve recommendations in all participating domains through representation learning and embedding transfer." Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,cao2022contrastive,\cite{cao2022contrastive},Contrastive Cross-Domain Sequential Recommendation,http://arxiv.org/abs/2304.03891v1,"Cross-Domain Sequential Recommendation (CDSR) aims to predict future interactions based on user's historical sequential interactions from multiple domains. Generally, a key challenge of CDSR is how to mine precise cross-domain user preference based on the intra-sequence and inter-sequence item interactions. Existing works first learn single-domain user preference only with intra-sequence item interactions, and then build a transferring module to obtain cross-domain user preference.
However, such a pipeline and implicit solution can be severely limited by the bottleneck of the designed transferring module, and fails to consider inter-sequence item relationships. In this paper, we propose C^2DSR to tackle the above problems to capture precise user preferences. The main idea is to simultaneously leverage the intra- and inter- sequence item relationships, and jointly learn the single- and cross- domain user preferences. Specifically, we first utilize a graph neural network to mine inter-sequence item collaborative relationship, and then exploit sequential attentive encoder to capture intra-sequence item sequential relationship. Based on them, we devise two different sequential training objectives to obtain user single-domain and cross-domain representations. Furthermore, we present a novel contrastive cross-domain infomax objective to enhance the correlation between single- and cross- domain user representations by maximizing their mutual information. To validate the effectiveness of C^2DSR, we first re-split four e-commerce datasets, and then conduct extensive experiments to demonstrate the effectiveness of our approach C^2DSR.",True,True,"Cao, Jiangxia and Cong, Xin and Sheng, Jiawei and Liu, Tingwen and Wang, Bin",2022.0,,,,,Contrastive Cross-Domain Sequential Recommendation,Contrastive Cross-Domain Sequential Recommendation,https://dl.acm.org/doi/10.1145/3511808.3557262,Cross-Domain Sequential Recommendation (CDSR) aims to predict future interactions based on user's historical sequential interactions from multiple domains. Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,park2024pacer,\cite{park2024pacer},"Pacer and Runner: Cooperative Learning Framework between Single- and Cross-Domain Sequential Recommendation",http://arxiv.org/abs/2407.11245v2,"Cross-Domain Sequential Recommendation (CDSR) improves recommendation performance by utilizing information from multiple domains, which contrasts with Single-Domain Sequential Recommendation (SDSR) that relies on a historical interaction within a specific domain. However, CDSR may underperform compared to the SDSR approach in certain domains due to negative transfer, which occurs when there is a lack of relation between domains or different levels of data sparsity. To address the issue of negative transfer, our proposed CDSR model estimates the degree of negative transfer of each domain and adaptively assigns it as a weight factor to the prediction loss, to control gradient flows through domains with significant negative transfer. To this end, our model compares the performance of a model trained on multiple domains (CDSR) with a model trained solely on the specific domain (SDSR) to evaluate the negative transfer of each domain using our asymmetric cooperative network. In addition, to facilitate the transfer of valuable cues between the SDSR and CDSR tasks, we developed an auxiliary loss that maximizes the mutual information between the representation pairs from both tasks on a per-domain basis. This cooperative learning between SDSR and CDSR tasks is similar to the collaborative dynamics between pacers and runners in a marathon. Our model outperformed numerous previous works in extensive experiments on two real-world industrial datasets across ten service domains.
We also have deployed our model in the recommendation system of our personal assistant app service, resulting in 21.4% increase in click-through rate compared to existing models, which is valuable to real-world business.",True,True,"Park, Chung and Kim, Taesan and Yoon, Hyungjun and Hong, Junui and Yu, Yelim and Cho, Mincheol and Choi, Minsung and Choo, Jaegul",2024.0,,,,,"Pacer and Runner: Cooperative Learning Framework between Single- and Cross-Domain Sequential Recommendation",Pacer and Runner: Cooperative Learning Framework between Single,https://www.researchgate.net/publication/382302484_Pacer_and_Runner_Cooperative_Learning_Framework_between_Single-_and_Cross-Domain_Sequential_Recommendation,"Cross-Domain Sequential Recommendation (CDSR) improves recommendation performance by utilizing information from multiple domains, which contrasts with" Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,hwang2024multi,\cite{hwang2024multi},Multi-Domain Sequential Recommendation via Domain Space Learning,,,True,False,"Hwang, Junyoung and Ju, Hyunjun and Kang, SeongKu and Jang, Sanghwan and Yu, Hwanjo",2024.0,,,,,Multi-Domain Sequential Recommendation via Domain Space Learning,Multi-Domain Sequential Recommendation via Domain Space ...,https://dl.acm.org/doi/10.1145/3626772.3657685,"Multi-Domain Sequential Recommendation via Domain Space Learning | Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval."
Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,zhang2024mdmtrec,\cite{zhang2024mdmtrec},MDMTRec: An Adaptive Multi-Task Multi-Domain Recommendation Framework,,,True,False,"Zhang, Zijian and Liu, Shuchang and Yu, Jiaao and Cai, Qingpeng and Zhao, Xiangyu and Zhang, Chunxu and Liu, Ziru and Liu, Qidong and Zhao, Hongwei and Hu, Lantao and others",2024.0,,,,,MDMTRec: An Adaptive Multi-Task Multi-Domain Recommendation Framework,MDMTRec: An Adaptive Multi-Task Multi-Domain ...,https://scholars.cityu.edu.hk/en/publications/mdmtrec(6cd7f151-faf5-4033-b53a-bc740739c7d6).html,"MDMTRec: An Adaptive Multi-Task Multi-Domain Recommendation Framework ; Title of host publication, 47th International ACM SIGIR Conference on" Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,ma2019pi,\cite{ma2019pi},$\pi$-net: A parallel information-sharing network for shared-account cross-domain sequential recommendations,,,True,False,"Ma, Muyang and Ren, Pengjie and Lin, Yujie and Chen, Zhumin and Ma, Jun and Rijke, Maarten de",2019.0,,,,,$\pi$-net: A parallel information-sharing network for shared-account cross-domain sequential recommendations,[PDF] π-Net: A Parallel Information-sharing Network for ...,https://www.semanticscholar.org/paper/%CF%80-Net%3A-A-Parallel-Information-sharing-Network-for-Ma-Ren/fa990aee9a8f157b5d393f5f3eaa014e1e5c67aa,A Parallel Information-sharing Network (π-Net) is proposed to simultaneously generate recommendations for two domains where user behaviors Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,ma2022mixed,\cite{ma2022mixed},Mixed Information Flow for Cross-domain Sequential Recommendations,http://arxiv.org/abs/2012.00485v3,"Cross-domain sequential recommendation is the task of predict the next item that the user is most likely to interact with based on past sequential behavior from multiple domains. One of the key challenges in cross-domain sequential recommendation is to grasp and transfer the flow of information from multiple domains so as to promote recommendations in all domains. Previous studies have investigated the flow of behavioral information by exploring the connection between items from different domains. The flow of knowledge (i.e., the connection between knowledge from different domains) has so far been neglected. In this paper, we propose a mixed information flow network for cross-domain sequential recommendation to consider both the flow of behavioral information and the flow of knowledge by incorporating a behavior transfer unit and a knowledge transfer unit. The proposed mixed information flow network is able to decide when cross-domain information should be used and, if so, which cross-domain information should be used to enrich the sequence representation according to users' current preferences. 
Extensive experiments conducted on four e-commerce datasets demonstrate that mixed information flow network is able to further improve recommendation performance in different domains by modeling mixed information flow.",True,True,"Ma, Muyang and Ren, Pengjie and Chen, Zhumin and Ren, Zhaochun and Zhao, Lifan and Liu, Peiyu and Ma, Jun and de Rijke, Maarten",2022.0,,,,ACM Transactions on Knowledge Discovery from Data (TKDD),Mixed Information Flow for Cross-domain Sequential Recommendations,Mixed Information Flow for Cross-Domain Sequential ...,https://dl.acm.org/doi/10.1145/3487331,"In this article, we propose a mixed information flow network for cross-domain sequential recommendation to consider both the flow of behavioral information and" Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,lin2024mixed,\cite{lin2024mixed},Mixed Attention Network for Cross-domain Sequential Recommendation,http://arxiv.org/abs/2311.08272v1,"In modern recommender systems, sequential recommendation leverages chronological user behaviors to make effective next-item suggestions, which suffers from data sparsity issues, especially for new users. One promising line of work is the cross-domain recommendation, which trains models with data across multiple domains to improve the performance in data-scarce domains. Recent proposed cross-domain sequential recommendation models such as PiNet and DASL have a common drawback relying heavily on overlapped users in different domains, which limits their usage in practical recommender systems. In this paper, we propose a Mixed Attention Network (MAN) with local and global attention modules to extract the domain-specific and cross-domain information. Firstly, we propose a local/global encoding layer to capture the domain-specific/cross-domain sequential pattern. Then we propose a mixed attention layer with item similarity attention, sequence-fusion attention, and group-prototype attention to capture the local/global item similarity, fuse the local/global item sequence, and extract the user groups across different domains, respectively. Finally, we propose a local/global prediction layer to further evolve and combine the domain-specific and cross-domain interests. Experimental results on two real-world datasets (each with two domains) demonstrate the superiority of our proposed model. Further study also illustrates that our proposed method and components are model-agnostic and effective, respectively. The code and data are available at https://github.com/Guanyu-Lin/MAN.",True,True,"Lin, Guanyu and Gao, Chen and Zheng, Yu and Chang, Jianxin and Niu, Yanan and Song, Yang and Gai, Kun and Li, Zhiheng and Jin, Depeng and Li, Yong and others",2024.0,,,,,Mixed Attention Network for Cross-domain Sequential Recommendation,[2311.08272] Mixed Attention Network for Cross-domain Sequential ...,https://arxiv.org/abs/2311.08272,"In this paper, we propose a Mixed Attention Network (MAN) with local and global attention modules to extract the domain-specific and cross-domain information." Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,nagrani2021attention,\cite{nagrani2021attention},Attention Bottlenecks for Multimodal Fusion,http://arxiv.org/abs/2107.00135v3,"Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multiple modalities such as vision and audio. 
Machine perception models, in stark contrast, are typically modality-specific and optimised for unimodal benchmarks, and hence late-stage fusion of final representations or predictions from each modality (`late-fusion') is still a dominant paradigm for multimodal video classification. Instead, we introduce a novel transformer based architecture that uses `fusion bottlenecks' for modality fusion at multiple layers. Compared to traditional pairwise self-attention, our model forces information between different modalities to pass through a small number of bottleneck latents, requiring the model to collate and condense the most relevant information in each modality and only share what is necessary. We find that such a strategy improves fusion performance, at the same time reducing computational cost. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple audio-visual classification benchmarks including Audioset, Epic-Kitchens and VGGSound. All code and models will be released.",True,True,"Nagrani, Arsha and Yang, Shan and Arnab, Anurag and Jansen, Aren and Schmid, Cordelia and Sun, Chen",2021.0,,,,Advances in neural information processing systems,Attention Bottlenecks for Multimodal Fusion,Attention Bottlenecks for Multimodal Fusion,http://arxiv.org/pdf/2107.00135v3,"Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multiple modalities such as vision and audio. Machine perception models, in stark contrast, are typically modality-specific and optimised for unimodal benchmarks, and hence late-stage fusion of final representations or predictions from each modality (`late-fusion') is still a dominant paradigm for multimodal video classification. Instead, we introduce a novel transformer based architecture that uses `fusion bottlenecks' for modality fusion at multiple layers. Compared to traditional pairwise self-attention, our model forces information between different modalities to pass through a small number of bottleneck latents, requiring the model to collate and condense the most relevant information in each modality and only share what is necessary. We find that such a strategy improves fusion performance, at the same time reducing computational cost. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple audio-visual classification benchmarks including Audioset, Epic-Kitchens and VGGSound. All code and models will be released." Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,tsai2019multimodal,\cite{tsai2019multimodal},Multimodal Transformer for Unaligned Multimodal Language Sequences,http://arxiv.org/abs/1906.00295v1,"Human language is often multimodal, which comprehends a mixture of natural language, facial gestures, and acoustic behaviors. However, two major challenges in modeling such multimodal human language time-series data exist: 1) inherent data non-alignment due to variable sampling rates for the sequences from each modality; and 2) long-range dependencies between elements across modalities. In this paper, we introduce the Multimodal Transformer (MulT) to generically address the above issues in an end-to-end manner without explicitly aligning the data. At the heart of our model is the directional pairwise crossmodal attention, which attends to interactions between multimodal sequences across distinct time steps and latently adapt streams from one modality to another. 
Comprehensive experiments on both aligned and non-aligned multimodal time-series show that our model outperforms state-of-the-art methods by a large margin. In addition, empirical analysis suggests that correlated crossmodal signals are able to be captured by the proposed crossmodal attention mechanism in MulT.",True,True,"Tsai, Yao-Hung Hubert and Bai, Shaojie and Liang, Paul Pu and Kolter, J Zico and Morency, Louis-Philippe and Salakhutdinov, Ruslan",2019.0,,,,,Multimodal Transformer for Unaligned Multimodal Language Sequences,Multimodal Transformer for Unaligned Multimodal Language Sequences,http://arxiv.org/pdf/1906.00295v1,"Human language is often multimodal, which comprehends a mixture of natural language, facial gestures, and acoustic behaviors. However, two major challenges in modeling such multimodal human language time-series data exist: 1) inherent data non-alignment due to variable sampling rates for the sequences from each modality; and 2) long-range dependencies between elements across modalities. In this paper, we introduce the Multimodal Transformer (MulT) to generically address the above issues in an end-to-end manner without explicitly aligning the data. At the heart of our model is the directional pairwise crossmodal attention, which attends to interactions between multimodal sequences across distinct time steps and latently adapt streams from one modality to another. Comprehensive experiments on both aligned and non-aligned multimodal time-series show that our model outperforms state-of-the-art methods by a large margin. In addition, empirical analysis suggests that correlated crossmodal signals are able to be captured by the proposed crossmodal attention mechanism in MulT." Revisiting Self-attention for Cross-domain Sequential Recommendation,2505.21811v1,xu2023multimodal,\cite{xu2023multimodal},Multimodal Learning with Transformers: A Survey,http://arxiv.org/abs/2206.06488v2,"Transformer is a promising neural network learner, and has achieved great success in various machine learning tasks. Thanks to the recent prevalence of multimodal applications and big data, Transformer-based multimodal learning has become a hot topic in AI research. This paper presents a comprehensive survey of Transformer techniques oriented at multimodal data. The main contents of this survey include: (1) a background of multimodal learning, Transformer ecosystem, and the multimodal big data era, (2) a theoretical review of Vanilla Transformer, Vision Transformer, and multimodal Transformers, from a geometrically topological perspective, (3) a review of multimodal Transformer applications, via two important paradigms, i.e., for multimodal pretraining and for specific multimodal tasks, (4) a summary of the common challenges and designs shared by the multimodal Transformer models and applications, and (5) a discussion of open problems and potential research directions for the community.",True,True,"Xu, Peng and Zhu, Xiatian and Clifton, David A",2023.0,,,,IEEE Transactions on Pattern Analysis and Machine Intelligence,Multimodal Learning with Transformers: A Survey,Multimodal Learning with Transformers: A Survey,http://arxiv.org/pdf/2206.06488v2,"Transformer is a promising neural network learner, and has achieved great success in various machine learning tasks. Thanks to the recent prevalence of multimodal applications and big data, Transformer-based multimodal learning has become a hot topic in AI research. 
This paper presents a comprehensive survey of Transformer techniques oriented at multimodal data. The main contents of this survey include: (1) a background of multimodal learning, Transformer ecosystem, and the multimodal big data era, (2) a theoretical review of Vanilla Transformer, Vision Transformer, and multimodal Transformers, from a geometrically topological perspective, (3) a review of multimodal Transformer applications, via two important paradigms, i.e., for multimodal pretraining and for specific multimodal tasks, (4) a summary of the common challenges and designs shared by the multimodal Transformer models and applications, and (5) a discussion of open problems and potential research directions for the community." "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,gu2021self,\cite{gu2021self},Self-supervised learning on users' spontaneous behaviors for multi-scenario ranking in e-commerce,,,True,False,"Gu, Yulong and Bao, Wentian and Ou, Dan and Li, Xiang and Cui, Baoliang and Ma, Biyu and Huang, Haikuan and Liu, Qingwen and Zeng, Xiaoyi",2021.0,,,,,Self-supervised learning on users' spontaneous behaviors for multi-scenario ranking in e-commerce,(PDF) Self-Supervised Learning on Users' Spontaneous ...,https://www.researchgate.net/publication/356247829_Self-Supervised_Learning_on_Users'_Spontaneous_Behaviors_for_Multi-Scenario_Ranking_in_E-commerce,Self-Supervised Learning on Users' Spontaneous Behaviors for Multi-Scenario Ranking in E-commerce ; Scenario 1: Homepage Scenario 2: Search Discovery Scenario 3: "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,lqd1,\cite{lqd1},Diffusion Augmentation for Sequential Recommendation,http://arxiv.org/abs/2309.12858v1,"Sequential recommendation (SRS) has become the technical foundation in many applications recently, which aims to recommend the next item based on the user's historical interactions. However, sequential recommendation often faces the problem of data sparsity, which widely exists in recommender systems. Besides, most users only interact with a few items, but existing SRS models often underperform these users. Such a problem, named the long-tail user problem, is still to be resolved. Data augmentation is a distinct way to alleviate these two problems, but they often need fabricated training strategies or are hindered by poor-quality generated interactions. To address these problems, we propose a Diffusion Augmentation for Sequential Recommendation (DiffuASR) for a higher quality generation. The augmented dataset by DiffuASR can be used to train the sequential recommendation models directly, free from complex training procedures. To make the best of the generation ability of the diffusion model, we first propose a diffusion-based pseudo sequence generation framework to fill the gap between image and sequence generation. Then, a sequential U-Net is designed to adapt the diffusion noise prediction model U-Net to the discrete sequence generation task. At last, we develop two guide strategies to assimilate the preference between generated and origin sequences. To validate the proposed DiffuASR, we conduct extensive experiments on three real-world datasets with three sequential recommendation models. The experimental results illustrate the effectiveness of DiffuASR. 
As far as we know, DiffuASR is one of the pioneers that introduces the diffusion model to recommendation.",True,True,"Liu, Qidong and Yan, Fan and Zhao, Xiangyu and Du, Zhaocheng and Guo, Huifeng and Tang, Ruiming and Tian, Feng",2023.0,,,,,Diffusion Augmentation for Sequential Recommendation,Diffusion Augmentation for Sequential Recommendation,http://arxiv.org/pdf/2309.12858v1,"Sequential recommendation (SRS) has become the technical foundation in many applications recently, which aims to recommend the next item based on the user's historical interactions. However, sequential recommendation often faces the problem of data sparsity, which widely exists in recommender systems. Besides, most users only interact with a few items, but existing SRS models often underperform these users. Such a problem, named the long-tail user problem, is still to be resolved. Data augmentation is a distinct way to alleviate these two problems, but they often need fabricated training strategies or are hindered by poor-quality generated interactions. To address these problems, we propose a Diffusion Augmentation for Sequential Recommendation (DiffuASR) for a higher quality generation. The augmented dataset by DiffuASR can be used to train the sequential recommendation models directly, free from complex training procedures. To make the best of the generation ability of the diffusion model, we first propose a diffusion-based pseudo sequence generation framework to fill the gap between image and sequence generation. Then, a sequential U-Net is designed to adapt the diffusion noise prediction model U-Net to the discrete sequence generation task. At last, we develop two guide strategies to assimilate the preference between generated and origin sequences. To validate the proposed DiffuASR, we conduct extensive experiments on three real-world datasets with three sequential recommendation models. The experimental results illustrate the effectiveness of DiffuASR. As far as we know, DiffuASR is one of the pioneers that introduces the diffusion model to recommendation." "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,lqd2,\cite{lqd2},Llm-esr: Large language models enhancement for long-tailed sequential recommendation,,,True,False,"Liu, Qidong and Wu, Xian and Wang, Yejing and Zhang, Zijian and Tian, Feng and Zheng, Yefeng and Zhao, Xiangyu",2024.0,,,,Advances in Neural Information Processing Systems,Llm-esr: Large language models enhancement for long-tailed sequential recommendation,[PDF] LLM-ESR: Large Language Models Enhancement for Long-tailed ...,https://proceedings.neurips.cc/paper_files/paper/2024/file/2f0728449cb3150189d765fc87afc913-Paper-Conference.pdf,"Firstly, we derive the semantic embeddings of items and users by encoding prompt texts from LLMs. Since these embeddings can be cached in advance, our integration does not impose any extra inference burden from LLMs. To tackle the long-tail item challenge, we devise a dual-view modeling framework that combines semantic and collaborative information. Figure 2: The overview of the proposed LLM-ESR framework. The contributions of this paper are as follows: • We propose a large language models enhancement framework, which can alleviate both long-tail user and item challenges for SRS by introducing semantic information from LLMs."
"Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,hyp1,\cite{hyp1},"ActionPiece: Contextually Tokenizing Action Sequences for Generative Recommendation",http://arxiv.org/abs/2502.13581v2,"Generative recommendation (GR) is an emerging paradigm where user actions are tokenized into discrete token patterns and autoregressively generated as predictions. However, existing GR models tokenize each action independently, assigning the same fixed tokens to identical actions across all sequences without considering contextual relationships. This lack of context-awareness can lead to suboptimal performance, as the same action may hold different meanings depending on its surrounding context. To address this issue, we propose ActionPiece to explicitly incorporate context when tokenizing action sequences. In ActionPiece, each action is represented as a set of item features. Given the action sequence corpora, we construct the vocabulary by merging feature patterns as new tokens, based on their co-occurrence frequency both within individual sets and across adjacent sets. Considering the unordered nature of feature sets, we further introduce set permutation regularization, which produces multiple segmentations of action sequences with the same semantics. Our code is available at: https://github.com/google-deepmind/action_piece.",True,True,Yupeng Hou and Jianmo Ni and Zhankui He and Noveen Sachdeva and Wang-Cheng Kang and Ed H. Chi and Julian McAuley and Derek Zhiyuan Cheng,2025.0,,,,,"ActionPiece: Contextually Tokenizing Action Sequences for Generative Recommendation",ActionPiece: Contextually Tokenizing Action Sequences for ...,https://arxiv.org/pdf/2502.13581?,by Y Hou · 2025 · Cited by 1 — Generative recommendation (GR) is an emerg- ing paradigm where user actions are tokenized into discrete token patterns and autoregressively. "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,hyp2,\cite{hyp2},Generating Long Semantic IDs in Parallel for Recommendation,http://arxiv.org/abs/2506.05781v1,"Semantic ID-based recommendation models tokenize each item into a small number of discrete tokens that preserve specific semantics, leading to better performance, scalability, and memory efficiency. While recent models adopt a generative approach, they often suffer from inefficient inference due to the reliance on resource-intensive beam search and multiple forward passes through the neural sequence model. As a result, the length of semantic IDs is typically restricted (e.g. to just 4 tokens), limiting their expressiveness. To address these challenges, we propose RPG, a lightweight framework for semantic ID-based recommendation. The key idea is to produce unordered, long semantic IDs, allowing the model to predict all tokens in parallel. We train the model to predict each token independently using a multi-token prediction loss, directly integrating semantics into the learning objective. During inference, we construct a graph connecting similar semantic IDs and guide decoding to avoid generating invalid IDs. Experiments show that scaling up semantic ID length to 64 enables RPG to outperform generative baselines by an average of 12.6% on the NDCG@10, while also improving inference efficiency. 
Code is available at: https://github.com/facebookresearch/RPG_KDD2025.",True,True,Yupeng Hou and Jiacheng Li and Ashley Shin and Jinsung Jeon and Abhishek Santhanam and Wei Shao and Kaveh Hassani and Ning Yao and Julian McAuley,2025.0,,,,,Generating Long Semantic IDs in Parallel for Recommendation,Generating Long Semantic IDs in Parallel for Recommendation,http://arxiv.org/pdf/2506.05781v1,"Semantic ID-based recommendation models tokenize each item into a small number of discrete tokens that preserve specific semantics, leading to better performance, scalability, and memory efficiency. While recent models adopt a generative approach, they often suffer from inefficient inference due to the reliance on resource-intensive beam search and multiple forward passes through the neural sequence model. As a result, the length of semantic IDs is typically restricted (e.g. to just 4 tokens), limiting their expressiveness. To address these challenges, we propose RPG, a lightweight framework for semantic ID-based recommendation. The key idea is to produce unordered, long semantic IDs, allowing the model to predict all tokens in parallel. We train the model to predict each token independently using a multi-token prediction loss, directly integrating semantics into the learning objective. During inference, we construct a graph connecting similar semantic IDs and guide decoding to avoid generating invalid IDs. Experiments show that scaling up semantic ID length to 64 enables RPG to outperform generative baselines by an average of 12.6% on the NDCG@10, while also improving inference efficiency. Code is available at: https://github.com/facebookresearch/RPG_KDD2025." "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,liuyue_Rec1,\cite{liuyue_Rec1},End-to-end Learnable Clustering for Intent Learning in Recommendation,http://arxiv.org/abs/2401.05975v5,"Intent learning, which aims to learn users' intents for user understanding and item recommendation, has become a hot research spot in recent years. However, existing methods suffer from complex and cumbersome alternating optimization, limiting performance and scalability. To this end, we propose a novel intent learning method termed \underline{ELCRec}, by unifying behavior representation learning into an \underline{E}nd-to-end \underline{L}earnable \underline{C}lustering framework, for effective and efficient \underline{Rec}ommendation. Concretely, we encode user behavior sequences and initialize the cluster centers (latent intents) as learnable neurons. Then, we design a novel learnable clustering module to separate different cluster centers, thus decoupling users' complex intents. Meanwhile, it guides the network to learn intents from behaviors by forcing behavior embeddings close to cluster centers. This allows simultaneous optimization of recommendation and clustering via mini-batch data. Moreover, we propose intent-assisted contrastive learning by using cluster centers as self-supervision signals, further enhancing mutual promotion. Both experimental results and theoretical analyses demonstrate the superiority of ELCRec from six perspectives. Compared to the runner-up, ELCRec improves NDCG@5 by 8.9\% and reduces computational costs by 22.5\% on the Beauty dataset. Furthermore, due to the scalability and universal applicability, we deploy this method on the industrial recommendation system with 130 million page views and achieve promising results. 
The codes are available on GitHub (https://github.com/yueliu1999/ELCRec). A collection (papers, codes, datasets) of deep group recommendation/intent learning methods is available on GitHub (https://github.com/yueliu1999/Awesome-Deep-Group-Recommendation).",True,True,"Liu, Yue and Zhu, Shihao and Xia, Jun and Ma, Yingwei and Ma, Jian and Zhong, Wenliang and Liu, Xinwang and Yu, Shengju and Zhang, Kejun",2024.0,,,,,End-to-end Learnable Clustering for Intent Learning in Recommendation,[PDF] End-to-end Learnable Clustering for Intent Learning in ... - NIPS,https://proceedings.neurips.cc/paper_files/paper/2024/file/0b5669c3b07bb8429af19a7919376ff5-Paper-Conference.pdf,"To this end, we propose a novel intent learning method termed ELCRec, by unifying behavior representation learning into an End-to-end Learnable Clustering framework, for effective and efficient Recommendation. To be specific, at the E-step, clustering algorithms are adopted to learn the latent intents from users’ behavior embeddings. Meanwhile, it makes the behavior embeddings close to cluster centers to guide the models to learn more accurate intents from users’ behaviors. • We innovatively promote the existing optimization framework of intent learning by unifying behavior representation learning and clustering optimization. • A new intent learning model termed ELCRec is proposed with a simple yet effective learnable cluster module and intent-assisted contrastive learning. 3.4.2 End-to-end Learnable Cluster Module After behavior encoding, we guide the model to learn the users’ latent intents from the behavior embeddings." "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,liuyue_rec2,\cite{liuyue_rec2},Identify Then Recommend: Towards Unsupervised Group Recommendation,http://arxiv.org/abs/2410.23757v1,"Group Recommendation (GR), which aims to recommend items to groups of users, has become a promising and practical direction for recommendation systems. This paper points out two issues of the state-of-the-art GR models. (1) The pre-defined and fixed number of user groups is inadequate for real-time industrial recommendation systems, where the group distribution can shift dynamically. (2) The training schema of existing GR methods is supervised, necessitating expensive user-group and group-item labels, leading to significant annotation costs. To this end, we present a novel unsupervised group recommendation framework named \underline{I}dentify \underline{T}hen \underline{R}ecommend (\underline{ITR}), where it first identifies the user groups in an unsupervised manner even without the pre-defined number of groups, and then two pre-text tasks are designed to conduct self-supervised group recommendation. Concretely, at the group identification stage, we first estimate the adaptive density of each user point, where areas with higher densities are more likely to be recognized as group centers. Then, a heuristic merge-and-split strategy is designed to discover the user groups and decision boundaries. Subsequently, at the self-supervised learning stage, the pull-and-repulsion pre-text task is proposed to optimize the user-group distribution. Besides, the pseudo group recommendation pre-text task is designed to assist the recommendations. Extensive experiments demonstrate the superiority and effectiveness of ITR on both user recommendation (e.g., 22.22\% NDCG@5 $\uparrow$) and group recommendation (e.g., 22.95\% NDCG@5 $\uparrow$). 
Furthermore, we deploy ITR on the industrial recommender and achieve promising results.",True,True,"Liu, Yue and Zhu, Shihao and Yang, Tianyuan and Ma, Jian and Zhong, Wenliang",2024.0,,,,,Identify Then Recommend: Towards Unsupervised Group Recommendation,Identify Then Recommend: Towards Unsupervised Group Recommendation,http://arxiv.org/pdf/2410.23757v1,"Group Recommendation (GR), which aims to recommend items to groups of users, has become a promising and practical direction for recommendation systems. This paper points out two issues of the state-of-the-art GR models. (1) The pre-defined and fixed number of user groups is inadequate for real-time industrial recommendation systems, where the group distribution can shift dynamically. (2) The training schema of existing GR methods is supervised, necessitating expensive user-group and group-item labels, leading to significant annotation costs. To this end, we present a novel unsupervised group recommendation framework named \underline{I}dentify \underline{T}hen \underline{R}ecommend (\underline{ITR}), where it first identifies the user groups in an unsupervised manner even without the pre-defined number of groups, and then two pre-text tasks are designed to conduct self-supervised group recommendation. Concretely, at the group identification stage, we first estimate the adaptive density of each user point, where areas with higher densities are more likely to be recognized as group centers. Then, a heuristic merge-and-split strategy is designed to discover the user groups and decision boundaries. Subsequently, at the self-supervised learning stage, the pull-and-repulsion pre-text task is proposed to optimize the user-group distribution. Besides, the pseudo group recommendation pre-text task is designed to assist the recommendations. Extensive experiments demonstrate the superiority and effectiveness of ITR on both user recommendation (e.g., 22.22\% NDCG@5 $\uparrow$) and group recommendation (e.g., 22.95\% NDCG@5 $\uparrow$). Furthermore, we deploy ITR on the industrial recommender and achieve promising results." "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,wang2019ngcf-graph-rec,\cite{wang2019ngcf-graph-rec},Neural Graph Collaborative Filtering,http://arxiv.org/abs/1905.08108v2,"Learning vector representations (aka. embeddings) of users and items lies at the core of modern recommender systems. Ranging from early matrix factorization to recently emerged deep learning based methods, existing efforts typically obtain a user's (or an item's) embedding by mapping from pre-existing features that describe the user (or the item), such as ID and attributes. We argue that an inherent drawback of such methods is that, the collaborative signal, which is latent in user-item interactions, is not encoded in the embedding process. As such, the resultant embeddings may not be sufficient to capture the collaborative filtering effect. In this work, we propose to integrate the user-item interactions -- more specifically the bipartite graph structure -- into the embedding process. We develop a new recommendation framework Neural Graph Collaborative Filtering (NGCF), which exploits the user-item graph structure by propagating embeddings on it. This leads to the expressive modeling of high-order connectivity in user-item graph, effectively injecting the collaborative signal into the embedding process in an explicit manner. 
We conduct extensive experiments on three public benchmarks, demonstrating significant improvements over several state-of-the-art models like HOP-Rec and Collaborative Memory Network. Further analysis verifies the importance of embedding propagation for learning better user and item representations, justifying the rationality and effectiveness of NGCF. Codes are available at https://github.com/xiangwang1223/neural_graph_collaborative_filtering.",True,True,"Wang, Xiang and He, Xiangnan and Wang, Meng and Feng, Fuli and Chua, Tat-Seng",2019.0,,,,,Neural Graph Collaborative Filtering,Neural Graph Collaborative Filtering,http://arxiv.org/pdf/1905.08108v2,"Learning vector representations (aka. embeddings) of users and items lies at the core of modern recommender systems. Ranging from early matrix factorization to recently emerged deep learning based methods, existing efforts typically obtain a user's (or an item's) embedding by mapping from pre-existing features that describe the user (or the item), such as ID and attributes. We argue that an inherent drawback of such methods is that, the collaborative signal, which is latent in user-item interactions, is not encoded in the embedding process. As such, the resultant embeddings may not be sufficient to capture the collaborative filtering effect. In this work, we propose to integrate the user-item interactions -- more specifically the bipartite graph structure -- into the embedding process. We develop a new recommendation framework Neural Graph Collaborative Filtering (NGCF), which exploits the user-item graph structure by propagating embeddings on it. This leads to the expressive modeling of high-order connectivity in user-item graph, effectively injecting the collaborative signal into the embedding process in an explicit manner. We conduct extensive experiments on three public benchmarks, demonstrating significant improvements over several state-of-the-art models like HOP-Rec and Collaborative Memory Network. Further analysis verifies the importance of embedding propagation for learning better user and item representations, justifying the rationality and effectiveness of NGCF. Codes are available at https://github.com/xiangwang1223/neural_graph_collaborative_filtering." "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,luo2023mamdr-multi-domain-rec,\cite{luo2023mamdr-multi-domain-rec},MAMDR: A model agnostic learning framework for multi-domain recommendation,,,True,False,"Luo, Linhao and Li, Yumeng and Gao, Buyu and Tang, Shuai and Wang, Sinan and Li, Jiancheng and Zhu, Tanchao and Liu, Jiancai and Li, Zhao and Pan, Shirui",2023.0,,,,,MAMDR: A model agnostic learning framework for multi-domain recommendation,MAMDR: A Model Agnostic Learning Framework for Multi ...,https://www.computer.org/csdl/proceedings-article/icde/2023/222700d079/1PBylOZcdi0,"MAMDR: A Model Agnostic Learning Framework for Multi-Domain Recommendation 2023 IEEE 39th International Conference on Data Engineering (ICDE) MAMDR: A Model Agnostic Learning Framework for Multi-Domain Recommendation To address the problems of MDR methods, we propose a novel model agnostic learning framework, namely MAMDR, for the multi-domain recommendation. We integrate these components into a unified framework and present MAMDR, which can be applied to any model structure to perform multi-domain recommendation. 
" "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,chen2021user-cross-domain-rec,\cite{chen2021user-cross-domain-rec},User-specific Adaptive Fine-tuning for Cross-domain Recommendations,http://arxiv.org/abs/2106.07864v2,"Making accurate recommendations for cold-start users has been a longstanding and critical challenge for recommender systems (RS). Cross-domain recommendations (CDR) offer a solution to tackle such a cold-start problem when there is no sufficient data for the users who have rarely used the system. An effective approach in CDR is to leverage the knowledge (e.g., user representations) learned from a related but different domain and transfer it to the target domain. Fine-tuning works as an effective transfer learning technique for this objective, which adapts the parameters of a pre-trained model from the source domain to the target domain. However, current methods are mainly based on the global fine-tuning strategy: the decision of which layers of the pre-trained model to freeze or fine-tune is taken for all users in the target domain. In this paper, we argue that users in RS are personalized and should have their own fine-tuning policies for better preference transfer learning. As such, we propose a novel User-specific Adaptive Fine-tuning method (UAF), selecting which layers of the pre-trained network to fine-tune, on a per-user basis. Specifically, we devise a policy network with three alternative strategies to automatically decide which layers to be fine-tuned and which layers to have their parameters frozen for each user. Extensive experiments show that the proposed UAF exhibits significantly better and more robust performance for user cold-start recommendation.",True,True,"Chen, Lei and Yuan, Fajie and Yang, Jiaxi and He, Xiangnan and Li, Chengming and Yang, Min",2021.0,,,,IEEE Transactions on Knowledge and Data Engineering,User-specific Adaptive Fine-tuning for Cross-domain Recommendations,User-Specific Adaptive Fine-Tuning for Cross-Domain ... - IEEE Xplore,https://ieeexplore.ieee.org/iel7/69/10036334/09573392.pdf,"User-specific adaptive fine-tuning (UAF) selects which layers of a pre-trained network to fine-tune on a per-user basis, unlike global fine-tuning." "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,chang2023pepnet-multi-domain-multi-task-rec,\cite{chang2023pepnet-multi-domain-multi-task-rec},"PEPNet: Parameter and Embedding Personalized Network for Infusing with Personalized Prior Information",http://arxiv.org/abs/2302.01115v3,"With the increase of content pages and interactive buttons in online services such as online-shopping and video-watching websites, industrial-scale recommender systems face challenges in multi-domain and multi-task recommendations. The core of multi-task and multi-domain recommendation is to accurately capture user interests in multiple scenarios given multiple user behaviors.
In this paper, we propose a plug-and-play \textit{\textbf{P}arameter and \textbf{E}mbedding \textbf{P}ersonalized \textbf{Net}work (\textbf{PEPNet})} for multi-domain and multi-task recommendation. PEPNet takes personalized prior information as input and dynamically scales the bottom-level Embedding and top-level DNN hidden units through gate mechanisms. \textit{Embedding Personalized Network (EPNet)} performs personalized selection on Embedding to fuse features with different importance for different users in multiple domains. \textit{Parameter Personalized Network (PPNet)} executes personalized modification on DNN parameters to balance targets with different sparsity for different users in multiple tasks. We have made a series of special engineering optimizations combining the Kuaishou training framework and the online deployment environment. By infusing personalized selection of Embedding and personalized modification of DNN parameters, PEPNet tailored to the interests of each individual obtains significant performance gains, with online improvements exceeding 1\% in multiple task metrics across multiple domains. We have deployed PEPNet in Kuaishou apps, serving over 300 million users every day.",True,True,"Chang, Jianxin and Zhang, Chenbin and Hui, Yiqun and Leng, Dewei and Niu, Yanan and Song, Yang and Gai, Kun",2023.0,,,,,"PEPNet: Parameter and Embedding Personalized Network for Infusing with Personalized Prior Information",[PDF] PEPNet: Parameter and Embedding Personalized Network ... - arXiv,https://arxiv.org/pdf/2302.01115, "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,fu2023unified-llm-multi-domain-rec,\cite{fu2023unified-llm-multi-domain-rec},A unified framework for multi-domain ctr prediction via large language models,,,True,False,"Fu, Zichuan and Li, Xiangyang and Wu, Chuhan and Wang, Yichao and Dong, Kuicai and Zhao, Xiangyu and Zhao, Mengchen and Guo, Huifeng and Tang, Ruiming",2023.0,,,,ACM Transactions on Information Systems,A unified framework for multi-domain ctr prediction via large language models,[2312.10743] A Unified Framework for Multi-Domain CTR Prediction ...,https://arxiv.org/abs/2312.10743,"[2312.10743] A Unified Framework for Multi-Domain CTR Prediction via Large Language Models Title:A Unified Framework for Multi-Domain CTR Prediction via Large Language Models View a PDF of the paper titled A Unified Framework for Multi-Domain CTR Prediction via Large Language Models, by Zichuan Fu and 8 other authors" "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,li2022gromov-cross-domain-rec,\cite{li2022gromov-cross-domain-rec},Gromov-wasserstein guided representation learning for cross-domain recommendation,,,True,False,"Li, Xinhang and Qiu, Zhaopeng and Zhao, Xiangyu and Wang, Zihao and Zhang, Yong and Xing, Chunxiao and Wu, Xian",2022.0,,,,,Gromov-wasserstein guided representation learning for cross-domain recommendation,HestiaSky - GitHub,https://github.com/HestiaSky,GWCDR Public.
Repo of CIKM2022 Paper Gromov-Wasserstein Guided Representation Learning for Cross-Domain Recommendation. "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,fan2023adversarial-cross-domain-rec,\cite{fan2023adversarial-cross-domain-rec},Adversarial attacks for black-box recommender systems via copying transferable cross-domain user profiles,,,True,False,"Fan, Wenqi and Zhao, Xiangyu and Li, Qing and Derr, Tyler and Ma, Yao and Liu, Hui and Wang, Jianping and Tang, Jiliang",2023.0,,,,IEEE Transactions on Knowledge and Data Engineering,Adversarial attacks for black-box recommender systems via copying transferable cross-domain user profiles,Adversarial Attacks for Black-Box Recommender Systems ...,https://scholars.cityu.edu.hk/en/publications/adversarial-attacks-for-blackbox-recommender-systems-via-copying-transferable-crossdomain-user-profiles(bbbd7461-d6c5-4f43-9217-6ecba620be44)/projects.html,Adversarial Attacks for Black-Box Recommender Systems Via Copying Transferable Cross-Domain User Profiles "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,gao2023autotransfer-cross-domain-rec,\cite{gao2023autotransfer-cross-domain-rec},AutoTransfer: Instance transfer for cross-domain recommendations,,,True,False,"Gao, Jingtong and Zhao, Xiangyu and Chen, Bo and Yan, Fan and Guo, Huifeng and Tang, Ruiming",2023.0,,,,,AutoTransfer: Instance transfer for cross-domain recommendations,Instance Transfer for Cross-Domain Recommendations,https://dl.acm.org/doi/pdf/10.1145/3539618.3591701,"by J Gao · 2023 · Cited by 28 — Specifically, AutoTransfer acts as an agent that adaptively selects a subset of informative and transferable instances from the source domain." "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,wang2017item,\cite{wang2017item},"Item Silk Road: Recommending Items from Information Domains to Social Users",http://arxiv.org/abs/1706.03205v1,"Online platforms can be divided into information-oriented and social-oriented domains. The former refers to forums or E-commerce sites that emphasize user-item interactions, like Trip.com and Amazon; whereas the latter refers to social networking services (SNSs) that have rich user-user connections, such as Facebook and Twitter. Despite their heterogeneity, these two domains can be bridged by a few overlapping users, dubbed as bridge users. In this work, we address the problem of cross-domain social recommendation, i.e., recommending relevant items of information domains to potential users of social networks. To our knowledge, this is a new problem that has rarely been studied before. Existing cross-domain recommender systems are unsuitable for this task since they have either focused on homogeneous information domains or assumed that users are fully overlapped. Towards this end, we present a novel Neural Social Collaborative Ranking (NSCR) approach, which seamlessly sews up the user-item interactions in information domains and user-user connections in SNSs. In the information domain part, the attributes of users and items are leveraged to strengthen the embedding learning of users and items. In the SNS part, the embeddings of bridge users are propagated to learn the embeddings of other non-bridge users.
Extensive experiments on two real-world datasets demonstrate the effectiveness and rationality of our NSCR method.",True,True,"Wang, Xiang and He, Xiangnan and Nie, Liqiang and Chua, Tat-Seng",2017.0,,,,,"Item Silk Road: Recommending Items from Information Domains to Social Users",[PDF] Recommending Items from Information Domains to Social Users,https://hexiangnan.github.io/papers/sigir17-SilkRoad.pdf,"For the modelling of information domain, we build an attribute-aware recommender based on the NCF framework. To fully exploit the interactions among a user, an" "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,zhu2024m,\cite{zhu2024m},M-scan: A Multi-Scenario Causal-driven Adaptive Network for Recommendation,,,True,False,"Zhu, Jiachen and Wang, Yichao and Lin, Jianghao and Qin, Jiarui and Tang, Ruiming and Zhang, Weinan and Yu, Yong",2024.0,,,,,M-scan: A Multi-Scenario Causal-driven Adaptive Network for Recommendation,M-scan: A Multi-Scenario Causal-driven Adaptive Network for ... - arXiv,https://arxiv.org/abs/2404.07581,"[2404.07581] M-scan: A Multi-Scenario Causal-driven Adaptive Network for Recommendation View a PDF of the paper titled M-scan: A Multi-Scenario Causal-driven Adaptive Network for Recommendation, by Jiachen Zhu and 6 other authors" "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,jin2022multi,\cite{jin2022multi},Multi-Scale User Behavior Network for Entire Space Multi-Task Learning,http://arxiv.org/abs/2208.01889v2,"Modelling the user's multiple behaviors is an essential part of modern e-commerce, whose widely adopted application is to jointly optimize click-through rate (CTR) and conversion rate (CVR) predictions. Most of existing methods overlook the effect of two key characteristics of the user's behaviors: for each item list, (i) contextual dependence refers to that the user's behaviors on any item are not purely determinated by the item itself but also are influenced by the user's previous behaviors (e.g., clicks, purchases) on other items in the same sequence; (ii) multiple time scales means that users are likely to click frequently but purchase periodically. To this end, we develop a new multi-scale user behavior network named Hierarchical rEcurrent Ranking On the Entire Space (HEROES) which incorporates the contextual information to estimate the user multiple behaviors in a multi-scale fashion. Concretely, we introduce a hierarchical framework, where the lower layer models the user's engagement behaviors while the upper layer estimates the user's satisfaction behaviors. The proposed architecture can automatically learn a suitable time scale for each layer to capture the dynamic user's behavioral patterns.
Besides the architecture, we also introduce the Hawkes process to form a novel recurrent unit which can not only encode the items' features in the context but also formulate the excitation or discouragement from the user's previous behaviors. We further show that HEROES can be extended to build unbiased ranking systems through combinations with the survival analysis technique. Extensive experiments over three large-scale industrial datasets demonstrate the superiority of our model compared with the state-of-the-art methods.",True,True,"Jin, Jiarui and Chen, Xianyu and Zhang, Weinan and Chen, Yuanbo and Jiang, Zaifan and Zhu, Zekun and Su, Zhewen and Yu, Yong",2022.0,,,,,Multi-Scale User Behavior Network for Entire Space Multi-Task Learning,[PDF] Multi-Scale User Behavior Network for Entire Space Multi-Task ...,https://scispace.com/pdf/multi-scale-user-behavior-network-for-entire-space-multi-2poiqr95.pdf,The proposed architecture can automatically learn a suitable time scale for each layer to capture the dynamic user's behavioral patterns. Besides the "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,tang2020ple-multi-task-rec,\cite{tang2020ple-multi-task-rec},Progressive layered extraction (ple): A novel multi-task learning (mtl) model for personalized recommendations,,,True,False,"Tang, Hongyan and Liu, Junning and Zhao, Ming and Gong, Xudong",2020.0,,,,,Progressive layered extraction (ple): A novel multi-task learning (mtl) model for personalized recommendations,Progressive layered extraction (PLE): A novel multi-task ... - Papertalk,https://papertalk.org/papertalks/24677,"Progressive layered extraction (PLE): A novel multi-task learning (MTL) model for personalized recommendations. Hongyan Tang, Junning Liu, Ming Zhao, Xudong" "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,tong2024mdap,\cite{tong2024mdap},"MDAP: A Multi-view Disentangled and Adaptive Preference Learning Framework for Cross-Domain Recommendation",http://arxiv.org/abs/2410.05877v1,"Cross-domain Recommendation systems leverage multi-domain user interactions to improve performance, especially in sparse data or new user scenarios. However, CDR faces challenges such as effectively capturing user preferences and avoiding negative transfer. To address these issues, we propose the Multi-view Disentangled and Adaptive Preference Learning (MDAP) framework. Our MDAP framework uses a multiview encoder to capture diverse user preferences. The framework includes a gated decoder that adaptively combines embeddings from different views to generate a comprehensive user representation. By disentangling representations and allowing adaptive feature selection, our model enhances adaptability and effectiveness.
Extensive experiments on benchmark datasets demonstrate that our method significantly outperforms state-of-the-art CDR and single-domain models, providing more accurate recommendations and deeper insights into user behavior across different domains.",True,True,"Tong, Junxiong and Yin, Mingjia and Wang, Hao and Pan, Qiushi and Lian, Defu and Chen, Enhong",2024.0,,,,,"MDAP: A Multi-view Disentangled and Adaptive Preference Learning Framework for Cross-Domain Recommendation",[Paper Review] MDAP: A Multi-view Disentangled and Adaptive ...,https://www.themoonlight.io/zh/review/mdap-a-multi-view-disentangled-and-adaptive-preference-learning-framework-for-cross-domain-recommendation,"This paper proposes a Multi-view Disentangled and Adaptive Preference learning framework (MDAP) that aims to address challenges in Cross-Domain Recommendation (CDR) such as capturing user preferences and negative transfer." "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,ning2023multi-multi-domain-graph-rec,\cite{ning2023multi-multi-domain-graph-rec},"Multi-domain Recommendation with Embedding Disentangling and Domain Alignment",http://arxiv.org/abs/2308.05508v2,"Multi-domain recommendation (MDR) aims to provide recommendations for different domains (e.g., types of products) with overlapping users/items and is common for platforms such as Amazon, Facebook, and LinkedIn that host multiple services. Existing MDR models face two challenges: First, it is difficult to disentangle knowledge that generalizes across domains (e.g., a user likes cheap items) and knowledge specific to a single domain (e.g., a user likes blue clothing but not blue cars). Second, they have limited ability to transfer knowledge across domains with small overlaps. We propose a new MDR method named EDDA with two key components, i.e., embedding disentangling recommender and domain alignment, to tackle the two challenges respectively. In particular, the embedding disentangling recommender separates both the model and embedding for the inter-domain part and the intra-domain part, while most existing MDR methods only focus on model-level disentangling. The domain alignment leverages random walks from graph processing to identify similar user/item pairs from different domains and encourages similar user/item pairs to have similar embeddings, enhancing knowledge transfer. We compare EDDA with 12 state-of-the-art baselines on 3 real datasets. The results show that EDDA consistently outperforms the baselines on all datasets and domains. All datasets and codes are available at https://github.com/Stevenn9981/EDDA.",True,True,"Ning, Wentao and Yan, Xiao and Liu, Weiwen and Cheng, Reynold and Zhang, Rui and Tang, Bo",2023.0,,,,,"Multi-domain Recommendation with Embedding Disentangling and Domain Alignment",Multi-domain Recommendation with Embedding ...,https://www.researchgate.net/publication/374904260_Multi-domain_Recommendation_with_Embedding_Disentangling_and_Domain_Alignment,"EDDA (Ning et al. 2023 ) is a recent leading CDR method, consisting of an embedding disentangling recommender and a domain alignment strategy." "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,sheng2021star-multi-domain-rec,\cite{sheng2021star-multi-domain-rec},"One Model to Serve All: Star Topology Adaptive Recommender for Multi-Domain CTR Prediction",http://arxiv.org/abs/2101.11427v5,"Traditional industrial recommenders are usually trained on a single business domain and then serve for this domain.
However, in large commercial platforms, it is often the case that the recommenders need to make click-through rate (CTR) predictions for multiple business domains. Different domains have overlapping user groups and items. Thus, there exist commonalities. Since the specific user groups have disparity and the user behaviors may change in various business domains, there also have distinctions. The distinctions result in domain-specific data distributions, making it hard for a single shared model to work well on all domains. To learn an effective and efficient CTR model to handle multiple domains simultaneously, we present Star Topology Adaptive Recommender (STAR). Concretely, STAR has the star topology, which consists of the shared centered parameters and domain-specific parameters. The shared parameters are applied to learn commonalities of all domains, and the domain-specific parameters capture domain distinction for more refined prediction. Given requests from different business domains, STAR can adapt its parameters conditioned on the domain characteristics. The experimental result from production data validates the superiority of the proposed STAR model. Since 2020, STAR has been deployed in the display advertising system of Alibaba, obtaining averaging 8.0% improvement on CTR and 6.0% on RPM (Revenue Per Mille).",True,True,"Sheng, Xiang-Rong and Zhao, Liqin and Zhou, Guorui and Ding, Xinyao and Dai, Binding and Luo, Qiang and Yang, Siran and Lv, Jingshan and Zhang, Chi and Deng, Hongbo and others",2021.0,,,,,"One Model to Serve All: Star Topology Adaptive Recommender for Multi-Domain CTR Prediction",One Model to Serve All: Star Topology Adaptive Recommender for ...,https://www.semanticscholar.org/paper/One-Model-to-Serve-All%3A-Star-Topology-Adaptive-for-Sheng-Zhao/4823b385d45b2b33b3ca613813efad3aa84b3dd4,The Star Topology Adaptive Recommender (STAR) model is proposed to train a single model to serve all domains by leveraging data from all domains "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,yan2022apg-rec,\cite{yan2022apg-rec},"APG: Adaptive Parameter Generation Network for Click-Through Rate Prediction",http://arxiv.org/abs/2203.16218v3,"In many web applications, deep learning-based CTR prediction models (deep CTR models for short) are widely adopted. Traditional deep CTR models learn patterns in a static manner, i.e., the network parameters are the same across all the instances. However, such a manner can hardly characterize each of the instances which may have different underlying distributions. It actually limits the representation power of deep CTR models, leading to sub-optimal results. In this paper, we propose an efficient, effective, and universal module, named as Adaptive Parameter Generation network (APG), which can dynamically generate parameters for deep CTR models on-the-fly based on different instances. Extensive experimental evaluation results show that APG can be applied to a variety of deep CTR models and significantly improve their performance. Meanwhile, APG can reduce the time cost by 38.7\% and memory usage by 96.6\% compared to a regular deep CTR model. 
We have deployed APG in the industrial sponsored search system and achieved 3\% CTR gain and 1\% RPM gain respectively.",True,True,"Yan, Bencheng and Wang, Pengjie and Zhang, Kai and Li, Feng and Deng, Hongbo and Xu, Jian and Zheng, Bo",2022.0,,,,Advances in Neural Information Processing Systems,"APG: Adaptive Parameter Generation Network for Click-Through Rate Prediction",APG: Adaptive Parameter Generation Network for Click ...,https://arxiv.org/abs/2203.16218,"View a PDF of the paper titled APG: Adaptive Parameter Generation Network for Click-Through Rate Prediction, by Bencheng Yan and 6 other authors In this paper, we propose an efficient, effective, and universal module, named as Adaptive Parameter Generation network (APG), which can dynamically generate parameters for deep CTR models on-the-fly based on different instances." "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,bian2020can-rec,\cite{bian2020can-rec},CAN: Feature Co-Action for Click-Through Rate Prediction,http://arxiv.org/abs/2011.05625v3,"Feature interaction has been recognized as an important problem in machine learning, which is also very essential for click-through rate (CTR) prediction tasks. In recent years, Deep Neural Networks (DNNs) can automatically learn implicit nonlinear interactions from original sparse features, and therefore have been widely used in industrial CTR prediction tasks. However, the implicit feature interactions learned in DNNs cannot fully retain the complete representation capacity of the original and empirical feature interactions (e.g., cartesian product) without loss. For example, a simple attempt to learn the combination of feature A and feature B as the explicit cartesian product representation of new features can outperform previous implicit feature interaction models including factorization machine (FM)-based models and their variations. In this paper, we propose a Co-Action Network (CAN) to approximate the explicit pairwise feature interactions without introducing too many additional parameters. More specifically, giving feature A and its associated feature B, their feature interaction is modeled by learning two sets of parameters: 1) the embedding of feature A, and 2) a Multi-Layer Perceptron (MLP) to represent feature B. The approximated feature interaction can be obtained by passing the embedding of feature A through the MLP network of feature B. We refer to such pairwise feature interaction as feature co-action, and such a Co-Action Network unit can provide a very powerful capacity to fitting complex feature interactions. Experimental results on public and industrial datasets show that CAN outperforms state-of-the-art CTR models and the cartesian product method.
Moreover, CAN has been deployed in the display advertisement system in Alibaba, obtaining 12\% improvement on CTR and 8\% on Revenue Per Mille (RPM), which is a great improvement to the business.",True,True,"Bian, Weijie and Wu, Kailun and Ren, Lejian and Pi, Qi and Zhang, Yujing and Xiao, Can and Sheng, Xiang-Rong and Zhu, Yong-Nan and Chan, Zhangming and Mou, Na and others",2020.0,,,,arXiv preprint arXiv:2011.05625,CAN: Feature Co-Action for Click-Through Rate Prediction,CAN: Feature Co-Action for Click-Through Rate Prediction,https://arxiv.org/pdf/2011.05625,"by W Bian · 2020 · Cited by 9 — CAN is a feature co-action method for click-through rate (CTR) prediction, addressing feature interaction, which is essential for CTR" "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,zhang2022m2m-multi-domain-multi-task-rec,\cite{zhang2022m2m-multi-domain-multi-task-rec},"Leaving No One Behind: A Multi-Scenario Multi-Task Meta Learning Approach for Advertiser Modeling",http://arxiv.org/abs/2201.06814v2,"Advertisers play an essential role in many e-commerce platforms like Taobao and Amazon. Fulfilling their marketing needs and supporting their business growth is critical to the long-term prosperity of platform economies. However, compared with extensive studies on user modeling such as click-through rate predictions, much less attention has been drawn to advertisers, especially in terms of understanding their diverse demands and performance. Different from user modeling, advertiser modeling generally involves many kinds of tasks (e.g. predictions of advertisers' expenditure, active-rate, or total impressions of promoted products). In addition, major e-commerce platforms often provide multiple marketing scenarios (e.g. Sponsored Search, Display Ads, Live Streaming Ads) while advertisers' behavior tend to be dispersed among many of them. This raises the necessity of multi-task and multi-scenario consideration in comprehensive advertiser modeling, which faces the following challenges: First, one model per scenario or per task simply doesn't scale; Second, it is particularly hard to model new or minor scenarios with limited data samples; Third, inter-scenario correlations are complicated, and may vary given different tasks. To tackle these challenges, we propose a multi-scenario multi-task meta learning approach (M2M) which simultaneously predicts multiple tasks in multiple advertising scenarios.",True,True,"Zhang, Qianqian and Liao, Xinru and Liu, Quan and Xu, Jian and Zheng, Bo",2022.0,,,,,"Leaving No One Behind: A Multi-Scenario Multi-Task Meta Learning Approach for Advertiser Modeling",A Multi-Scenario Multi-Task Meta Learning Approach for ...,https://dl.acm.org/doi/pdf/10.1145/3488560.3498479,by Q Zhang · 2022 · Cited by 63 — Leaving. No One Behind: A Multi-Scenario Multi-Task Meta Learning Approach for Advertiser Modeling. 
In Proceedings of the Fifteenth ACM "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,wang2023plate-multi-domain-rec,\cite{wang2023plate-multi-domain-rec},PLATE: A prompt-enhanced paradigm for multi-scenario recommendations,,,True,False,"Wang, Yuhao and Zhao, Xiangyu and Chen, Bo and Liu, Qidong and Guo, Huifeng and Liu, Huanshuo and Wang, Yichao and Zhang, Rui and Tang, Ruiming",2023.0,,,,,PLATE: A prompt-enhanced paradigm for multi-scenario recommendations,PLATE: A Prompt-Enhanced Paradigm for Multi-Scenario ...,https://dl.acm.org/doi/10.1145/3539618.3591750,"In this work, we propose a novel prompt-enhanced paradigm for multi-scenario recommendation. Specifically, a unified DRS backbone model is first" "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,wang2024diff-cold-multi-domain-rec,\cite{wang2024diff-cold-multi-domain-rec},Diff-MSR: A diffusion model enhanced paradigm for cold-start multi-scenario recommendation,,,True,False,"Wang, Yuhao and Liu, Ziru and Wang, Yichao and Zhao, Xiangyu and Chen, Bo and Guo, Huifeng and Tang, Ruiming",2024.0,,,,,Diff-MSR: A diffusion model enhanced paradigm for cold-start multi-scenario recommendation,Applied-Machine-Learning-Lab/Diff-MSR,https://github.com/Applied-Machine-Learning-Lab/Diff-MSR,"GitHub - Applied-Machine-Learning-Lab/Diff-MSR: Code for 'Diff-MSR: A Diffusion Model Enhanced Paradigm for Cold-Start Multi-Scenario Recommendation' accepted to WSDM 2024 **Run**: python douban_diff.py --indexx 0 --batch_size 512 --learning_rate 1e-3 --T 500 --beta 0.0002 --schedule other --objective pred_v --auto_normalize 0 --job 2 --model_name fnn python douban_diff1.py --indexx 0 --batch_size 512 --learning_rate 1e-3 --T 500 --beta 0.0002 --schedule other --objective pred_v --auto_normalize 0 --job 2 --model_name fnn **Run**: python douban_classifier_adversarial.py --beta 0.0002 --schedule other --objective pred_v --step 70 --T 500 --job 2 --model_name fnn --learning_rate 1e-3 **Run**: python douban_augment_final_v3.py --indexx 0 --learning_rate 2e-3 --T 500 --beta 0.0002 --schedule other --objective pred_v --auto_normalize 0 --job 1 --step1 30 --step2 50 --model_name fnn" "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,zhu2022user,\cite{zhu2022user},User-tag profile modeling in recommendation system via contrast weighted tag masking,,,True,False,"Zhu, Chenxu and Du, Peng and Zhu, Xianghui and Zhang, Weinan and Yu, Yong and Cao, Yang",2022.0,,,,,User-tag profile modeling in recommendation system via contrast weighted tag masking,Xianghui Zhu - Home - ACM Digital Library,https://dl.acm.org/profile/99660549247,"User-tag Profile Modeling in Recommendation System via Contrast Weighted Tag Masking · Chenxu Zhu. Shanghai Jiao Tong University, Shanghai, China." "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,zhou2022filter,\cite{zhou2022filter},Filter-enhanced MLP is All You Need for Sequential Recommendation,http://arxiv.org/abs/2202.13556v1,"Recently, deep neural networks such as RNN, CNN and Transformer have been applied in the task of sequential recommendation, which aims to capture the dynamic preference characteristics from logged user behavior data for accurate recommendation.
However, in online platforms, logged user behavior data is inevitable to contain noise, and deep recommendation models are easy to overfit on these logged data. To tackle this problem, we borrow the idea of filtering algorithms from signal processing that attenuates the noise in the frequency domain. In our empirical experiments, we find that filtering algorithms can substantially improve representative sequential recommendation models, and integrating simple filtering algorithms (eg Band-Stop Filter) with an all-MLP architecture can even outperform competitive Transformer-based models. Motivated by it, we propose \textbf{FMLP-Rec}, an all-MLP model with learnable filters for sequential recommendation task. The all-MLP architecture endows our model with lower time complexity, and the learnable filters can adaptively attenuate the noise information in the frequency domain. Extensive experiments conducted on eight real-world datasets demonstrate the superiority of our proposed method over competitive RNN, CNN, GNN and Transformer-based methods. Our code and data are publicly available at the link: \textcolor{blue}{\url{https://github.com/RUCAIBox/FMLP-Rec}}.",True,True,"Zhou, Kun and Yu, Hui and Zhao, Wayne Xin and Wen, Ji-Rong",2022.0,,,,,Filter-enhanced MLP is All You Need for Sequential Recommendation,Filter-enhanced MLP is All You Need for Sequential Recommendation,https://dl.acm.org/doi/10.1145/3485447.3512111,"We propose FMLP-Rec, an all-MLP model with learnable filters for sequential recommendation task. The all-MLP architecture endows our model with lower time" "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,standley2020tasks-multi-task,\cite{standley2020tasks-multi-task},Which Tasks Should Be Learned Together in Multi-task Learning?,http://arxiv.org/abs/1905.07553v4,"Many computer vision applications require solving multiple tasks in real-time. A neural network can be trained to solve multiple tasks simultaneously using multi-task learning. This can save computation at inference time as only a single network needs to be evaluated. Unfortunately, this often leads to inferior overall performance as task objectives can compete, which consequently poses the question: which tasks should and should not be learned together in one network when employing multi-task learning? We study task cooperation and competition in several different learning settings and propose a framework for assigning tasks to a few neural networks such that cooperating tasks are computed by the same neural network, while competing tasks are computed by different networks. 
Our framework offers a time-accuracy trade-off and can produce better accuracy using less inference time than not only a single large multi-task neural network but also many single-task networks.",True,True,"Standley, Trevor and Zamir, Amir and Chen, Dawn and Guibas, Leonidas and Malik, Jitendra and Savarese, Silvio",2020.0,,,,,Which Tasks Should Be Learned Together in Multi-task Learning?,Which Tasks Should Be Learned Together in Multi-task ...,https://arxiv.org/abs/1905.07553,"View a PDF of the paper titled Which Tasks Should Be Learned Together in Multi-task Learning?, by Trevor Standley and 4 other authors" "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,he2024efficient-multi-modal,\cite{he2024efficient-multi-modal},Efficient Modality Selection in Multimodal Learning,,,True,False,"He, Yifei and Cheng, Runxiang and Balasubramaniam, Gargi and Tsai, Yao-Hung Hubert and Zhao, Han",2024.0,,,,Journal of Machine Learning Research,Efficient Modality Selection in Multimodal Learning,Efficient Modality Selection in Multimodal Learning,https://jmlr.org/papers/v25/23-0439.html,"Multimodal learning aims to learn from data of different modalities by fusing information from heterogeneous sources. In this paper, we study the modality selection problem, which aims to select the most useful subset of modalities for learning under a cardinality constraint. To that end, we propose a unified theoretical framework to quantify the learning utility of modalities, and we identify dependence assumptions to flexibly model the heterogeneous nature of multimodal data, which also allows efficient algorithm design. We demonstrate the efficacy of our theoretical results and modality selection algorithms on 2 synthetic and 4 real-world data sets on a diverse range of multimodal data." "Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation",2505.20227v1,park2024pacer-cross-domain-rec,\cite{park2024pacer-cross-domain-rec},"Pacer and Runner: Cooperative Learning Framework between Single- and Cross-Domain Sequential Recommendation",http://arxiv.org/abs/2407.11245v2,"Cross-Domain Sequential Recommendation (CDSR) improves recommendation performance by utilizing information from multiple domains, which contrasts with Single-Domain Sequential Recommendation (SDSR) that relies on a historical interaction within a specific domain.
To address the issue of negative transfer, our proposed CDSR model estimates the degree of negative transfer of each domain and adaptively assigns it as a weight factor to the prediction loss, to control gradient flows through domains with significant negative transfer. To this end, our model compares the performance of a model trained on multiple domains (CDSR) with a model trained solely on the specific domain (SDSR) to evaluate the negative transfer of each domain using our asymmetric cooperative network. In addition, to facilitate the transfer of valuable cues between the SDSR and CDSR tasks, we developed an auxiliary loss that maximizes the mutual information between the representation pairs from both tasks on a per-domain basis. This cooperative learning between SDSR and CDSR tasks is similar to the collaborative dynamics between pacers and runners in a marathon. Our model outperformed numerous previous works in extensive experiments on two real-world industrial datasets across ten service domains. We also have deployed our model in the recommendation system of our personal assistant app service, resulting in 21.4% increase in click-through rate compared to existing models, which is valuable to real-world business.",True,True,"Park, Chung and Kim, Taesan and Yoon, Hyungjun and Hong, Junui and Yu, Yelim and Cho, Mincheol and Choi, Minsung and Choo, Jaegul",2024.0,,,,,"Pacer and Runner: Cooperative Learning Framework between Single- and Cross-Domain Sequential Recommendation",Pacer and Runner: Cooperative Learning Framework between Single,https://www.researchgate.net/publication/382302484_Pacer_and_Runner_Cooperative_Learning_Framework_between_Single-_and_Cross-Domain_Sequential_Recommendation,"Cross-Domain Sequential Recommendation (CDSR) improves recommendation performance by utilizing information from multiple domains, which contrasts with" "Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval",2505.19356v1,Robertson2009ThePR,\cite{Robertson2009ThePR},The Probabilistic Relevance Framework: {BM25} and Beyond,,,True,False,Stephen E. 
Robertson and Hugo Zaragoza,2009.0,,,,,The Probabilistic Relevance Framework: {BM25} and Beyond,The Probabilistic Relevance Framework: BM25 and Beyond,https://dl.acm.org/doi/10.1561/1500000019, "Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval",2505.19356v1,formal2021splade,\cite{formal2021splade},SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking,,,True,False,"Formal, Thibault and Piwowarski, Benjamin and Clinchant, St{\'e}phane",2021.0,,,,,SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking,SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking,http://arxiv.org/pdf/2107.05720v1,"In neural Information Retrieval, ongoing research is directed towards improving the first retriever in ranking pipelines. Learning dense embeddings to conduct retrieval using efficient approximate nearest neighbors methods has proven to work well. Meanwhile, there has been a growing interest in learning sparse representations for documents and queries, that could inherit from the desirable properties of bag-of-words models such as the exact matching of terms and the efficiency of inverted indexes. In this work, we present a new first-stage ranker based on explicit sparsity regularization and a log-saturation effect on term weights, leading to highly sparse representations and competitive results with respect to state-of-the-art dense and sparse methods. Our approach is simple, trained end-to-end in a single stage. We also explore the trade-off between effectiveness and efficiency, by controlling the contribution of the sparsity regularization." "Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval",2505.19356v1,formal2021splade-v2,\cite{formal2021splade-v2},SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval,http://arxiv.org/abs/2109.10086v1,"In neural Information Retrieval (IR), ongoing research is directed towards improving the first retriever in ranking pipelines. Learning dense embeddings to conduct retrieval using efficient approximate nearest neighbors methods has proven to work well. Meanwhile, there has been a growing interest in learning \emph{sparse} representations for documents and queries, that could inherit from the desirable properties of bag-of-words models such as the exact matching of terms and the efficiency of inverted indexes. Introduced recently, the SPLADE model provides highly sparse representations and competitive results with respect to state-of-the-art dense and sparse approaches. In this paper, we build on SPLADE and propose several significant improvements in terms of effectiveness and/or efficiency.
More specifically, we modify the pooling mechanism, benchmark a model solely based on document expansion, and introduce models trained with distillation. We also report results on the BEIR benchmark. Overall, SPLADE is considerably improved with more than $9$\% gains on NDCG@10 on TREC DL 2019, leading to state-of-the-art results on the BEIR benchmark.",True,True,"Formal, Thibault and Lassance, Carlos and Piwowarski, Benjamin and Clinchant, St{\'e}phane",2021.0,,,,arXiv preprint arXiv:2109.10086,SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval,SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval,http://arxiv.org/pdf/2109.10086v1,"In neural Information Retrieval (IR), ongoing research is directed towards improving the first retriever in ranking pipelines. Learning dense embeddings to conduct retrieval using efficient approximate nearest neighbors methods has proven to work well. Meanwhile, there has been a growing interest in learning \emph{sparse} representations for documents and queries, that could inherit from the desirable properties of bag-of-words models such as the exact matching of terms and the efficiency of inverted indexes. Introduced recently, the SPLADE model provides highly sparse representations and competitive results with respect to state-of-the-art dense and sparse approaches. In this paper, we build on SPLADE and propose several significant improvements in terms of effectiveness and/or efficiency. More specifically, we modify the pooling mechanism, benchmark a model solely based on document expansion, and introduce models trained with distillation. We also report results on the BEIR benchmark. Overall, SPLADE is considerably improved with more than $9$\% gains on NDCG@10 on TREC DL 2019, leading to state-of-the-art results on the BEIR benchmark." "Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval",2505.19356v1,dai2020context,\cite{dai2020context},Context-aware term weighting for first stage passage retrieval,,,True,False,"Dai, Zhuyun and Callan, Jamie",2020.0,,,,,Context-aware term weighting for first stage passage retrieval,Context-Aware Sentence/Passage Term Importance Estimation For ...,https://arxiv.org/abs/1910.10687,"**arXiv:1910.10687** (cs) View a PDF of the paper titled Context-Aware Sentence/Passage Term Importance Estimation For First Stage Retrieval, by Zhuyun Dai and Jamie Callan This paper proposes a Deep Contextualized Term Weighting framework that learns to map BERT's contextualized text representations to context-aware term weights for sentences and passages. View a PDF of the paper titled Context-Aware Sentence/Passage Term Importance Estimation For First Stage Retrieval, by Zhuyun Dai and Jamie Callan - [x] Bibliographic Explorer Toggle - [x] Connected Papers Toggle - [x] Litmaps Toggle - [x] alphaXiv Toggle - [x] Links to Code Toggle - [x] DagsHub Toggle - [x] GotitPub Toggle - [x] Links to Code Toggle - [x] ScienceCast Toggle - [x] Replicate Toggle " "Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval",2505.19356v1,johnson2019billion,\cite{johnson2019billion},Billion-scale similarity search with GPUs,http://arxiv.org/abs/1702.08734v1,"Similarity search finds application in specialized database systems handling complex data such as images or videos, which are typically represented by high-dimensional features and require specific indexing structures. This paper tackles the problem of better utilizing GPUs for this task. 
While GPUs excel at data-parallel tasks, prior approaches are bottlenecked by algorithms that expose less parallelism, such as k-min selection, or make poor use of the memory hierarchy. We propose a design for k-selection that operates at up to 55% of theoretical peak performance, enabling a nearest neighbor implementation that is 8.5x faster than prior GPU state of the art. We apply it in different similarity search scenarios, by proposing optimized design for brute-force, approximate and compressed-domain search based on product quantization. In all these setups, we outperform the state of the art by large margins. Our implementation enables the construction of a high accuracy k-NN graph on 95 million images from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced our approach for the sake of comparison and reproducibility.",True,True,"Johnson, Jeff and Douze, Matthijs and J{\'e}gou, Herv{\'e}",2019.0,,,,IEEE Transactions on Big Data,Billion-scale similarity search with GPUs,[PDF] Billion-scale similarity search with GPUs,https://www.eecs.umich.edu/courses/cse584/static_files/papers/1702.08734.pdf,"ABSTRACT. Similarity search finds application in specialized database systems handling complex data such as images or videos,." "Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval",2505.19356v1,karpukhin-etal-2020-dense,\cite{karpukhin-etal-2020-dense},Dense Passage Retrieval for Open-Domain Question Answering,http://arxiv.org/abs/2004.04906v3,"Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework. 
When evaluated on a wide range of open-domain QA datasets, our dense retriever outperforms a strong Lucene-BM25 system largely by 9%-19% absolute in terms of top-20 passage retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art on multiple open-domain QA benchmarks.",True,True,"Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau",2020.0,,,,,Dense Passage Retrieval for Open-Domain Question Answering,[2004.04906] Dense Passage Retrieval for Open-Domain ...,https://arxiv.org/abs/2004.04906,"Authors:Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih View a PDF of the paper titled Dense Passage Retrieval for Open-Domain Question Answering, by Vladimir Karpukhin and 7 other authors" "Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval",2505.19356v1,Xiong2020ApproximateNN,\cite{Xiong2020ApproximateNN},"Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval",http://arxiv.org/abs/2007.00808v2,"Conducting text retrieval in a dense learned representation space has many intriguing advantages over sparse retrieval. Yet the effectiveness of dense retrieval (DR) often requires combination with sparse retrieval. In this paper, we identify that the main bottleneck is in the training mechanisms, where the negative instances used in training are not representative of the irrelevant documents in testing. This paper presents Approximate nearest neighbor Negative Contrastive Estimation (ANCE), a training mechanism that constructs negatives from an Approximate Nearest Neighbor (ANN) index of the corpus, which is parallelly updated with the learning process to select more realistic negative training instances. This fundamentally resolves the discrepancy between the data distribution used in the training and testing of DR. In our experiments, ANCE boosts the BERT-Siamese DR model to outperform all competitive dense and sparse retrieval baselines. It nearly matches the accuracy of sparse-retrieval-and-BERT-reranking using dot-product in the ANCE-learned representation space and provides almost 100x speed-up.",True,True,Lee Xiong and Chenyan Xiong and Ye Li and Kwok-Fung Tang and Jialin Liu and Paul Bennett and Junaid Ahmed and Arnold Overwijk,2021.0,,,,,"Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval",Approximate Nearest Neighbor Negative Contrastive Learning for...,https://openreview.net/forum?id=zeFrfgyZln,"This paper improves the learning of dense text retrieval using ANCE, which selects global negatives with bigger gradient norms using an asynchronously updated" "Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval",2505.19356v1,mbert,\cite{mbert},How multilingual is Multilingual BERT?,http://arxiv.org/abs/1906.01502v1,"In this paper, we show that Multilingual BERT (M-BERT), released by Devlin et al.
(2018) as a single language model pre-trained from monolingual corpora in 104 languages, is surprisingly good at zero-shot cross-lingual model transfer, in which task-specific annotations in one language are used to fine-tune the model for evaluation in another language. To understand why, we present a large number of probing experiments, showing that transfer is possible even to languages in different scripts, that transfer works best between typologically similar languages, that monolingual corpora can train models for code-switching, and that the model can find translation pairs. From these results, we can conclude that M-BERT does create multilingual representations, but that these representations exhibit systematic deficiencies affecting certain language pairs.",True,True,"Pires, Telmo and Schlinger, Eva and Garrette, Dan",2019.0,,https://aclanthology.org/P19-1493/,10.18653/v1/P19-1493,,How multilingual is Multilingual BERT?,How multilingual is Multilingual BERT?,http://arxiv.org/pdf/1906.01502v1,"In this paper, we show that Multilingual BERT (M-BERT), released by Devlin et al. (2018) as a single language model pre-trained from monolingual corpora in 104 languages, is surprisingly good at zero-shot cross-lingual model transfer, in which task-specific annotations in one language are used to fine-tune the model for evaluation in another language. To understand why, we present a large number of probing experiments, showing that transfer is possible even to languages in different scripts, that transfer works best between typologically similar languages, that monolingual corpora can train models for code-switching, and that the model can find translation pairs. From these results, we can conclude that M-BERT does create multilingual representations, but that these representations exhibit systematic deficiencies affecting certain language pairs." "Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval",2505.19356v1,XLM-R,\cite{XLM-R},Unsupervised Cross-lingual Representation Learning at Scale,http://arxiv.org/abs/1911.02116v2,"This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6% average accuracy on XNLI, +13% average F1 score on MLQA, and +2.4% F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7% in XNLI accuracy for Swahili and 11.4% for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. 
We will make our code, data and models publicly available.",True,True,"Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin",2020.0,,https://aclanthology.org/2020.acl-main.747/,10.18653/v1/2020.acl-main.747,,Unsupervised Cross-lingual Representation Learning at Scale,[PDF] Unsupervised Cross-lingual Representation Learning at Scale,https://aclanthology.org/2020.acl-main.747.pdf,This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-. "Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval",2505.19356v1,SERENGETI,\cite{SERENGETI},SERENGETI: Massively Multilingual Language Models for Africa,http://arxiv.org/abs/2212.10785v2,"Multilingual pretrained language models (mPLMs) acquire valuable, generalizable linguistic information during pretraining and have advanced the state of the art on task-specific finetuning. To date, only ~31 out of ~2,000 African languages are covered in existing language models. We ameliorate this limitation by developing SERENGETI, a massively multilingual language model that covers 517 African languages and language varieties. We evaluate our novel models on eight natural language understanding tasks across 20 datasets, comparing to 4 mPLMs that cover 4-23 African languages. SERENGETI outperforms other models on 11 datasets across the eights tasks, achieving 82.27 average F_1. We also perform analyses of errors from our models, which allows us to investigate the influence of language genealogy and linguistic similarity when the models are applied under zero-shot settings. We will publicly release our models for research.\footnote{\href{https://github.com/UBC-NLP/serengeti}{https://github.com/UBC-NLP/serengeti}}",True,True,"Adebara, Ife and Elmadany, AbdelRahim and Abdul-Mageed, Muhammad and Alcoba Inciarte, Alcides",2023.0,,https://aclanthology.org/2023.findings-acl.97/,10.18653/v1/2023.findings-acl.97,,SERENGETI: Massively Multilingual Language Models for Africa,SERENGETI: Massively Multilingual Language Models for Africa,http://arxiv.org/pdf/2212.10785v2,"Multilingual pretrained language models (mPLMs) acquire valuable, generalizable linguistic information during pretraining and have advanced the state of the art on task-specific finetuning. To date, only ~31 out of ~2,000 African languages are covered in existing language models. We ameliorate this limitation by developing SERENGETI, a massively multilingual language model that covers 517 African languages and language varieties. We evaluate our novel models on eight natural language understanding tasks across 20 datasets, comparing to 4 mPLMs that cover 4-23 African languages. SERENGETI outperforms other models on 11 datasets across the eights tasks, achieving 82.27 average F_1. We also perform analyses of errors from our models, which allows us to investigate the influence of language genealogy and linguistic similarity when the models are applied under zero-shot settings. We will publicly release our models for research.\footnote{\href{https://github.com/UBC-NLP/serengeti}{https://github.com/UBC-NLP/serengeti}}" "Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval",2505.19356v1,AfriBERTa,\cite{AfriBERTa},Small Data? {No} Problem! 
{Exploring} the Viability of Pretrained Multilingual Language Models for Low-resourced Languages,,,True,False,"Ogueji, Kelechi and Zhu, Yuxin and Lin, Jimmy",2021.0,,https://aclanthology.org/2021.mrl-1.11/,10.18653/v1/2021.mrl-1.11,,Small Data? {No} Problem! {Exploring} the Viability of Pretrained Multilingual Language Models for Low-resourced Languages,[PDF] Small Data? No Problem! Exploring the Viability of Pretrained ...,https://cs.uwaterloo.ca/~jimmylin/publications/Ogueji_etal_MRL2021.pdf, "Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval",2505.19356v1,Izacard2021UnsupervisedDI,\cite{Izacard2021UnsupervisedDI},Unsupervised Dense Information Retrieval with Contrastive Learning,http://arxiv.org/abs/2112.09118v4,"Recently, information retrieval has seen the emergence of dense retrievers, using neural networks, as an alternative to classical sparse methods based on term-frequency. These models have obtained state-of-the-art results on datasets and tasks where large training sets are available. However, they do not transfer well to new applications with no training data, and are outperformed by unsupervised term-frequency methods such as BM25. In this work, we explore the limits of contrastive learning as a way to train unsupervised dense retrievers and show that it leads to strong performance in various retrieval settings. On the BEIR benchmark our unsupervised model outperforms BM25 on 11 out of 15 datasets for the Recall@100. When used as pre-training before fine-tuning, either on a few thousands in-domain examples or on the large MS~MARCO dataset, our contrastive model leads to improvements on the BEIR benchmark. Finally, we evaluate our approach for multi-lingual retrieval, where training data is even scarcer than for English, and show that our approach leads to strong unsupervised performance. Our model also exhibits strong cross-lingual transfer when fine-tuned on supervised English data only and evaluated on low resources language such as Swahili. We show that our unsupervised models can perform cross-lingual retrieval between different scripts, such as retrieving English documents from Arabic queries, which would not be possible with term matching methods.",True,True,Gautier Izacard and Mathilde Caron and Lucas Hosseini and Sebastian Riedel and Piotr Bojanowski and Armand Joulin and Edouard Grave,2022.0,,,,,Unsupervised Dense Information Retrieval with Contrastive Learning,Unsupervised Dense Information Retrieval with Contrastive Learning,http://arxiv.org/pdf/2112.09118v4,"Recently, information retrieval has seen the emergence of dense retrievers, using neural networks, as an alternative to classical sparse methods based on term-frequency. These models have obtained state-of-the-art results on datasets and tasks where large training sets are available. However, they do not transfer well to new applications with no training data, and are outperformed by unsupervised term-frequency methods such as BM25. In this work, we explore the limits of contrastive learning as a way to train unsupervised dense retrievers and show that it leads to strong performance in various retrieval settings. On the BEIR benchmark our unsupervised model outperforms BM25 on 11 out of 15 datasets for the Recall@100. When used as pre-training before fine-tuning, either on a few thousands in-domain examples or on the large MS~MARCO dataset, our contrastive model leads to improvements on the BEIR benchmark.
Finally, we evaluate our approach for multi-lingual retrieval, where training data is even scarcer than for English, and show that our approach leads to strong unsupervised performance. Our model also exhibits strong cross-lingual transfer when fine-tuned on supervised English data only and evaluated on low resources language such as Swahili. We show that our unsupervised models can perform cross-lingual retrieval between different scripts, such as retrieving English documents from Arabic queries, which would not be possible with term matching methods." "Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval",2505.19356v1,rust-etal-2021-good,\cite{rust-etal-2021-good},"How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models",http://arxiv.org/abs/2012.15613v2,"In this work, we provide a systematic and comprehensive empirical comparison of pretrained multilingual language models versus their monolingual counterparts with regard to their monolingual task performance. We study a set of nine typologically diverse languages with readily available pretrained monolingual models on a set of five diverse monolingual downstream tasks. We first aim to establish, via fair and controlled comparisons, if a gap between the multilingual and the corresponding monolingual representation of that language exists, and subsequently investigate the reason for any performance difference. To disentangle conflating factors, we train new monolingual models on the same data, with monolingually and multilingually trained tokenizers. We find that while the pretraining data size is an important factor, a designated monolingual tokenizer plays an equally important role in the downstream performance. Our results show that languages that are adequately represented in the multilingual model's vocabulary exhibit negligible performance decreases over their monolingual counterparts. We further find that replacing the original multilingual tokenizer with the specialized monolingual tokenizer improves the downstream performance of the multilingual model for almost every task and language.",True,True,"Rust, Phillip and Pfeiffer, Jonas and Vuli{\'c}, Ivan and Ruder, Sebastian and Gurevych, Iryna",2021.0,,https://aclanthology.org/2021.acl-long.243/,10.18653/v1/2021.acl-long.243,,"How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models",[PDF] How Good is Your Tokenizer? On the Monolingual Performance of ...,https://aclanthology.org/2021.acl-long.243.pdf,"Finally, we select a set of 9 languages from 8 language families, as listed in Table 1. We evaluate mBERT and monolingual BERT models on five downstream NLP tasks: named entity recognition (NER), sentiment analysis (SA), question answering (QA), universal dependency parsing (UDP), and part-of-speech tagging (POS). Note that, since we evaluate monolingual performance and not cross-lingual transfer performance, we require training data in the target language. Similar to how the chosen method of tokenization affects neural machine translation quality (Domingo et al., 2019), these results establish that, in fact, the designated pretrained tokenizer plays a fundamental role in the monolingual downstream task performance of contemporary LMs. In 18/24 language and task settings, the monolingual model from prior work (trained on more data) outperforms its corresponding MONOMODEL-MONOTOK model."
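Annotation (not part of the source records): the Contriever entry above (Izacard et al.) trains an unsupervised dense retriever with contrastive learning over a dual encoder, and DPR's dual-encoder setup appears earlier in this table. Below is a minimal numpy sketch of the InfoNCE objective with in-batch negatives; the batch size, embedding dimensionality, and temperature are illustrative assumptions, not values from either paper.

```python
# Minimal sketch (not the authors' code): InfoNCE loss with in-batch negatives,
# as used to train dual-encoder dense retrievers in the contrastive style.
import numpy as np

def info_nce_loss(q, p, tau=0.05):
    """q: (B, d) query embeddings, p: (B, d) positive passage embeddings.
    Every other passage in the batch serves as a negative."""
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    logits = q @ p.T / tau                       # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # diagonal = matched pairs

rng = np.random.default_rng(0)
q, p = rng.normal(size=(8, 128)), rng.normal(size=(8, 128))
print(info_nce_loss(q, p))
```

Each query is pulled toward its paired passage (the diagonal of the similarity matrix) and pushed away from the other passages in the batch.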
"Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval",2505.19356v1,yu2024arctic,\cite{yu2024arctic},Arctic-Embed 2.0: Multilingual Retrieval Without Compromise,http://arxiv.org/abs/2412.04506v2,"This paper presents the training methodology of Arctic-Embed 2.0, a set of open-source text embedding models built for accurate and efficient multilingual retrieval. While prior works have suffered from degraded English retrieval quality, Arctic-Embed 2.0 delivers competitive retrieval quality on multilingual and English-only benchmarks, and supports Matryoshka Representation Learning (MRL) for efficient embedding storage with significantly lower compressed quality degradation compared to alternatives. We detail the design and implementation, presenting several important open research questions that arose during model development. We conduct experiments exploring these research questions and include extensive discussion aimed at fostering further discussion in this field.",True,True,"Yu, Puxuan and Merrick, Luke and Nuti, Gaurav and Campos, Daniel",2024.0,,,,arXiv preprint arXiv:2412.04506,Arctic-Embed 2.0: Multilingual Retrieval Without Compromise,Arctic-Embed 2.0: Multilingual Retrieval Without Compromise,https://arxiv.org/abs/2412.04506,"by P Yu · 2024 · Cited by 25 — This paper presents the training methodology of Arctic-Embed 2.0, a set of open-source text embedding models built for accurate and efficient multilingual" "Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval",2505.19356v1,wang2024multilingual,\cite{wang2024multilingual},Multilingual E5 Text Embeddings: A Technical Report,http://arxiv.org/abs/2402.05672v1,"This technical report presents the training methodology and evaluation results of the open-source multilingual E5 text embedding models, released in mid-2023. Three embedding models of different sizes (small / base / large) are provided, offering a balance between the inference efficiency and embedding quality. The training procedure adheres to the English E5 model recipe, involving contrastive pre-training on 1 billion multilingual text pairs, followed by fine-tuning on a combination of labeled datasets. Additionally, we introduce a new instruction-tuned embedding model, whose performance is on par with state-of-the-art, English-only models of similar sizes. Information regarding the model release can be found at https://github.com/microsoft/unilm/tree/master/e5 .",True,True,"Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu",2024.0,,,,arXiv preprint arXiv:2402.05672,Multilingual E5 Text Embeddings: A Technical Report,Multilingual E5 Text Embeddings: A Technical Report,http://arxiv.org/pdf/2402.05672v1,"This technical report presents the training methodology and evaluation results of the open-source multilingual E5 text embedding models, released in mid-2023. Three embedding models of different sizes (small / base / large) are provided, offering a balance between the inference efficiency and embedding quality. The training procedure adheres to the English E5 model recipe, involving contrastive pre-training on 1 billion multilingual text pairs, followed by fine-tuning on a combination of labeled datasets. Additionally, we introduce a new instruction-tuned embedding model, whose performance is on par with state-of-the-art, English-only models of similar sizes. Information regarding the model release can be found at https://github.com/microsoft/unilm/tree/master/e5 ." 
"Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval",2505.19356v1,nigatu2024searched,\cite{nigatu2024searched},"""I Searched for a Religious Song in Amharic and Got Sexual Content Instead"": Investigating Online Harm in Low-Resourced Languages on YouTube",http://arxiv.org/abs/2405.16656v1,"Online social media platforms such as YouTube have a wide, global reach. However, little is known about the experience of low-resourced language speakers on such platforms; especially in how they experience and navigate harmful content. To better understand this, we (1) conducted semi-structured interviews (n=15) and (2) analyzed search results (n=9313), recommendations (n=3336), channels (n=120) and comments (n=406) of policy-violating sexual content on YouTube focusing on the Amharic language. Our findings reveal that -- although Amharic-speaking YouTube users find the platform crucial for several aspects of their lives -- participants reported unplanned exposure to policy-violating sexual content when searching for benign, popular queries. Furthermore, malicious content creators seem to exploit under-performing language technologies and content moderation to further target vulnerable groups of speakers, including migrant domestic workers, diaspora, and local Ethiopians. Overall, our study sheds light on how failures in low-resourced language technology may lead to exposure to harmful content and suggests implications for stakeholders in minimizing harm. Content Warning: This paper includes discussions of NSFW topics and harmful content (hate, abuse, sexual harassment, self-harm, misinformation). The authors do not support the creation or distribution of harmful content.",True,True,Hellina Hailu Nigatu and Inioluwa Deborah Raji,2024.0,,https://doi.org/10.1145/3630106.3658546,10.1145/3630106.3658546,,"""I Searched for a Religious Song in Amharic and Got Sexual Content Instead"": Investigating Online Harm in Low-Resourced Languages on YouTube",Studies using QualCoder,https://qualcoder.wordpress.com/studies-using-qualcoder/,“I Searched for a Religious Song in Amharic and Got Sexual Content Instead”: Investigating Online Harm in Low-Resourced Languages on YouTube. 
Hellina Hailu "Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval",2505.19356v1,destaw-etal-2021-development,\cite{destaw-etal-2021-development},The Development of Pre-processing Tools and Pre-trained Embedding Models for {A}mharic,,,True,False,"Belay, Tadesse Destaw and Ayele, Abinew and Yimam, Seid Muhie",2021.0,,https://aclanthology.org/2021.winlp-1.5/,,,The Development of Pre-processing Tools and Pre-trained Embedding Models for {A}mharic,(PDF) The Development of Pre-processing Tools and Pre-trained ...,https://www.academia.edu/86449740/The_Development_of_Pre_processing_Tools_and_Pre_trained_Embedding_Models_for_Amharic,"To study the impacts of homophone normalization, we develop different general-purpose pre-trained embedding models for Amharic using regular and normalized" "Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval",2505.19356v1,yeshambel2021morphologically,\cite{yeshambel2021morphologically},Morphologically annotated {Amharic} text corpora,,,True,False,"Yeshambel, Tilahun and Mothe, Josiane and Assabie, Yaregal",2021.0,,,,,Morphologically annotated {Amharic} text corpora,[PDF] Morphologically Annotated Amharic Text Corpora,https://univ-tlse2.hal.science/hal-03362977/file/Morphologically.pdf,"This paper presents morphologically annotated Amharic lexicons as well as stem-based and root-based morphologically annotated corpora which could be used by the research community as benchmark collections either to evaluate morphological analyzers or information retrieval for Amharic. An Amharic native annotator morphologically segmented unique words from the corpus into their affixes and basic stems or roots. [Figure: morphological annotation process, from preprocessing and unique word extraction to stem-based and root-based annotated corpora. Resource Paper, SIGIR ’21.] The stem-based and root-based morphological forms of each word were created manually by an annotator." "Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval",2505.19356v1,azime2024enhancing,\cite{azime2024enhancing},"Walia-LLM: Enhancing Amharic-LLaMA by Integrating Task-Specific and Generative Datasets",http://arxiv.org/abs/2402.08015v5,"Large language models (LLMs) have received a lot of attention in natural language processing (NLP) research because of their exceptional performance in understanding and generating human languages. However, low-resource languages are left behind due to the unavailability of resources. In this work, we focus on enhancing the LLaMA-2-Amharic model by integrating task-specific and generative datasets to improve language model performance for Amharic. We compile an Amharic instruction fine-tuning dataset and fine-tuned LLaMA-2-Amharic model. The fine-tuned model shows promising results in different NLP tasks.
We open-source our dataset creation pipeline, instruction datasets, trained models, and evaluation outputs to promote language-specific studies on these models.",True,True,"Azime, Israel Abebe and Fuge, Mitiku Yohannes and Tonja, Atnafu Lambebo and Belay, Tadesse Destaw and Wassie, Aman Kassahun and Jada, Eyasu Shiferaw and Chanie, Yonas and Sewunetie, Walelign Tewabe and Yimam, Seid Muhie",2024.0,,,,arXiv preprint arXiv:2402.08015,"Walia-LLM: Enhancing Amharic-LLaMA by Integrating Task-Specific and Generative Datasets",[PDF] Walia-LLM: Enhancing Amharic-LLaMA by Integrating Task-Specific ...,https://aclanthology.org/2024.findings-emnlp.25.pdf,"In our research, we aim to improve the performance of the Amharic LLAMA model by integrating task-specific and generative datasets, as shown" "Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval",2505.19356v1,2AIRTC,\cite{2AIRTC},{2AIRTC}: The {Amharic} Adhoc Information Retrieval Test Collection,,,True,False,"Yeshambel, Tilahun and Mothe, Josiane and Assabie, Yaregal",2020.0,,https://doi.org/10.1007/978-3-030-58219-7_5,10.1007/978-3-030-58219-7_5,,{2AIRTC}: The {Amharic} Adhoc Information Retrieval Test Collection,2AIRTC: The Amharic Adhoc Information Retrieval Test Collection,https://link.springer.com/chapter/10.1007/978-3-030-58219-7_5,"2AIRTC: The Amharic Adhoc Information Retrieval Test Collection | SpringerLink." "Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval",2505.19356v1,am_news_data,\cite{am_news_data},An Amharic News Text classification Dataset,http://arxiv.org/abs/2103.05639v1,"In NLP, text classification is one of the primary problems we try to solve and its uses in language analyses are indisputable. The lack of labeled training data made it harder to do these tasks in low resource languages like Amharic. The task of collecting, labeling, annotating, and making valuable this kind of data will encourage junior researchers, schools, and machine learning practitioners to implement existing classification models in their language. In this short paper, we aim to introduce the Amharic text classification dataset that consists of more than 50k news articles that were categorized into 6 classes.
This dataset is made available with easy baseline performances to encourage studies and better performance experiments.",True,True,"Azime, Israel Abebe and Mohammed, Nebil",2021.0,,,,arXiv preprint arXiv:2103.05639,An Amharic News Text classification Dataset,An Amharic News Text classification Dataset,http://arxiv.org/pdf/2103.05639v1,"In NLP, text classification is one of the primary problems we try to solve and its uses in language analyses are indisputable. The lack of labeled training data made it harder to do these tasks in low resource languages like Amharic. The task of collecting, labeling, annotating, and making valuable this kind of data will encourage junior researchers, schools, and machine learning practitioners to implement existing classification models in their language. In this short paper, we aim to introduce the Amharic text classification dataset that consists of more than 50k news articles that were categorized into 6 classes. This dataset is made available with easy baseline performances to encourage studies and better performance experiments." "Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization",2505.19307v1,DBLP:conf/iclr/XiongXLTLBAO21,\cite{DBLP:conf/iclr/XiongXLTLBAO21},"Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval",http://arxiv.org/abs/2007.00808v2,"Conducting text retrieval in a dense learned representation space has many intriguing advantages over sparse retrieval. Yet the effectiveness of dense retrieval (DR) often requires combination with sparse retrieval. In this paper, we identify that the main bottleneck is in the training mechanisms, where the negative instances used in training are not representative of the irrelevant documents in testing. This paper presents Approximate nearest neighbor Negative Contrastive Estimation (ANCE), a training mechanism that constructs negatives from an Approximate Nearest Neighbor (ANN) index of the corpus, which is parallelly updated with the learning process to select more realistic negative training instances. This fundamentally resolves the discrepancy between the data distribution used in the training and testing of DR. In our experiments, ANCE boosts the BERT-Siamese DR model to outperform all competitive dense and sparse retrieval baselines. It nearly matches the accuracy of sparse-retrieval-and-BERT-reranking using dot-product in the ANCE-learned representation space and provides almost 100x speed-up.",True,True,"Lee Xiong and Chenyan Xiong and Ye Li and Kwok{-}Fung Tang and Jialin Liu and Paul N. Bennett and Junaid Ahmed and Arnold Overwijk",2021.0,,,,,"Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval",Approximate Nearest Neighbor Negative Contrastive Learning for...,https://openreview.net/forum?id=zeFrfgyZln,"This paper improves the learning of dense text retrieval using ANCE, which selects global negatives with bigger gradient norms using an asynchronously updated" "Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization",2505.19307v1,DBLP:conf/emnlp/KarpukhinOMLWEC20,\cite{DBLP:conf/emnlp/KarpukhinOMLWEC20},Dense Passage Retrieval for Open-Domain Question Answering,http://arxiv.org/abs/2004.04906v3,"Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method. 
In this work, we show that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework. When evaluated on a wide range of open-domain QA datasets, our dense retriever outperforms a strong Lucene-BM25 system largely by 9%-19% absolute in terms of top-20 passage retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art on multiple open-domain QA benchmarks.",True,True,"Vladimir Karpukhin and Barlas Oguz and Sewon Min and Patrick S. H. Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen{-}tau Yih",2020.0,,,,,Dense Passage Retrieval for Open-Domain Question Answering,[2004.04906] Dense Passage Retrieval for Open-Domain ...,https://arxiv.org/abs/2004.04906,"arXiv:2004.04906 (cs). Authors: Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih. Dense Passage Retrieval for Open-Domain Question Answering." "Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization",2505.19307v1,DBLP:conf/acl/GaoC22,\cite{DBLP:conf/acl/GaoC22},"Unsupervised Corpus Aware Language Model Pre-training for Dense Passage Retrieval",http://arxiv.org/abs/2108.05540v1,"Recent research demonstrates the effectiveness of using fine-tuned language models~(LM) for dense retrieval. However, dense retrievers are hard to train, typically requiring heavily engineered fine-tuning pipelines to realize their full potential. In this paper, we identify and address two underlying problems of dense retrievers: i)~fragility to training data noise and ii)~requiring large batches to robustly learn the embedding space. We use the recently proposed Condenser pre-training architecture, which learns to condense information into the dense vector through LM pre-training. On top of it, we propose coCondenser, which adds an unsupervised corpus-level contrastive loss to warm up the passage embedding space. Retrieval experiments on MS-MARCO, Natural Question, and Trivia QA datasets show that coCondenser removes the need for heavy data engineering such as augmentation, synthesis, or filtering, as well as the need for large batch training. It shows comparable performance to RocketQA, a state-of-the-art, heavily engineered system, using simple small batch fine-tuning.",True,True,"Luyu Gao and Jamie Callan",2022.0,,,,,"Unsupervised Corpus Aware Language Model Pre-training for Dense Passage Retrieval",[PDF] Unsupervised Corpus Aware Language Model Pre-training for ...,https://aclanthology.org/2022.acl-long.203.pdf,"In this work, we use contrastive learning to do pre-training for dense retrieval.
Different from earlier work, instead of individual" "Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization",2505.19307v1,lu-etal-2021-less,\cite{lu-etal-2021-less},{Less is More: Pretrain a Strong {S}iamese Encoder for Dense Text Retrieval Using a Weak Decode},,,True,False,"Lu, Shuqi and He, Di and Xiong, Chenyan and Ke, Guolin and Malik, Waleed and Dou, Zhicheng and Bennett, Paul and Liu, Tie-Yan and Overwijk, Arnold",2021.0,,,,,{Less is More: Pretrain a Strong {S}iamese Encoder for Dense Text Retrieval Using a Weak Decode},Less is More: Pretrain a Strong Siamese Encoder for Dense Text ...,https://aclanthology.org/2021.emnlp-main.220/,"We propose a new self-learning method that pre-trains the autoencoder using a weak decoder, with restricted capacity and attention flexibility." "Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization",2505.19307v1,DBLP:conf/emnlp/XiaoLSC22,\cite{DBLP:conf/emnlp/XiaoLSC22},"RetroMAE: Pre-Training Retrieval-oriented Language Models Via Masked Auto-Encoder",http://arxiv.org/abs/2205.12035v2,"Despite pre-training's progress in many important NLP tasks, it remains to explore effective pre-training strategies for dense retrieval. In this paper, we propose RetroMAE, a new retrieval oriented pre-training paradigm based on Masked Auto-Encoder (MAE). RetroMAE is highlighted by three critical designs. 1) A novel MAE workflow, where the input sentence is polluted for encoder and decoder with different masks. The sentence embedding is generated from the encoder's masked input; then, the original sentence is recovered based on the sentence embedding and the decoder's masked input via masked language modeling. 2) Asymmetric model structure, with a full-scale BERT like transformer as encoder, and a one-layer transformer as decoder. 3) Asymmetric masking ratios, with a moderate ratio for encoder: 15~30%, and an aggressive ratio for decoder: 50~70%. Our framework is simple to realize and empirically competitive: the pre-trained models dramatically improve the SOTA performances on a wide range of dense retrieval benchmarks, like BEIR and MS MARCO. The source code and pre-trained models are made publicly available at https://github.com/staoxiao/RetroMAE so as to inspire more interesting research.",True,True,"Shitao Xiao and Zheng Liu and Yingxia Shao and Zhao Cao",2022.0,,,,,"RetroMAE: Pre-Training Retrieval-oriented Language Models Via Masked Auto-Encoder",[PDF] Pre-Training Retrieval-oriented Language Models Via Masked Auto ...,https://aclanthology.org/2022.emnlp-main.35.pdf,"RetroMAE is a retrieval-oriented pre-training model using Masked Auto-Encoder (MAE) with a novel workflow, asymmetric structure, and masking ratios." "Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization",2505.19307v1,DBLP:conf/acl/LeeCT19,\cite{DBLP:conf/acl/LeeCT19},Latent Retrieval for Weakly Supervised Open Domain Question Answering,http://arxiv.org/abs/1906.00300v3,"Recent work on open domain question answering (QA) assumes strong supervision of the supporting evidence and/or assumes a blackbox information retrieval (IR) system to retrieve evidence candidates. We argue that both are suboptimal, since gold evidence is not always available, and QA is fundamentally different from IR. We show for the first time that it is possible to jointly learn the retriever and reader from question-answer string pairs and without any IR system. 
In this setting, evidence retrieval from all of Wikipedia is treated as a latent variable. Since this is impractical to learn from scratch, we pre-train the retriever with an Inverse Cloze Task. We evaluate on open versions of five QA datasets. On datasets where the questioner already knows the answer, a traditional IR system such as BM25 is sufficient. On datasets where a user is genuinely seeking an answer, we show that learned retrieval is crucial, outperforming BM25 by up to 19 points in exact match.",True,True,"Kenton Lee and Ming{-}Wei Chang and Kristina Toutanova",2019.0,,,,,Latent Retrieval for Weakly Supervised Open Domain Question Answering,Latent Retrieval for Weakly Supervised Open Domain Question Answering,http://arxiv.org/pdf/1906.00300v3,"Recent work on open domain question answering (QA) assumes strong supervision of the supporting evidence and/or assumes a blackbox information retrieval (IR) system to retrieve evidence candidates. We argue that both are suboptimal, since gold evidence is not always available, and QA is fundamentally different from IR. We show for the first time that it is possible to jointly learn the retriever and reader from question-answer string pairs and without any IR system. In this setting, evidence retrieval from all of Wikipedia is treated as a latent variable. Since this is impractical to learn from scratch, we pre-train the retriever with an Inverse Cloze Task. We evaluate on open versions of five QA datasets. On datasets where the questioner already knows the answer, a traditional IR system such as BM25 is sufficient. On datasets where a user is genuinely seeking an answer, we show that learned retrieval is crucial, outperforming BM25 by up to 19 points in exact match." "Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization",2505.19307v1,DBLP:conf/sigir/MaGZFC22,\cite{DBLP:conf/sigir/MaGZFC22},"{Pre-train a Discriminative Text Encoder for Dense Retrieval via Contrastive Span Prediction}",,,True,False,"Xinyu Ma and Jiafeng Guo and Ruqing Zhang and Yixing Fan and Xueqi Cheng",2022.0,,,,,"{Pre-train a Discriminative Text Encoder for Dense Retrieval via Contrastive Span Prediction}",Pre-train a Discriminative Text Encoder for Dense Retrieval ...,https://www.researchgate.net/publication/360164354_Pre-train_a_Discriminative_Text_Encoder_for_Dense_Retrieval_via_Contrastive_Span_Prediction,"Therefore, in this work, we introduce a novel contrastive span prediction task to pre-train the encoder alone, but still retain the bottleneck ability of the" "Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization",2505.19307v1,DBLP:journals/corr/abs-2401-11248,\cite{DBLP:journals/corr/abs-2401-11248},"Drop your Decoder: Pre-training with Bag-of-Word Prediction for Dense Passage Retrieval",http://arxiv.org/abs/2401.11248v2,"Masked auto-encoder pre-training has emerged as a prevalent technique for initializing and enhancing dense retrieval systems. It generally utilizes additional Transformer decoder blocks to provide sustainable supervision signals and compress contextual information into dense representations. However, the underlying reasons for the effectiveness of such a pre-training technique remain unclear. The usage of additional Transformer-based decoders also incurs significant computational costs. 
In this study, we aim to shed light on this issue by revealing that masked auto-encoder (MAE) pre-training with enhanced decoding significantly improves the term coverage of input tokens in dense representations, compared to vanilla BERT checkpoints. Building upon this observation, we propose a modification to the traditional MAE by replacing the decoder of a masked auto-encoder with a completely simplified Bag-of-Word prediction task. This modification enables the efficient compression of lexical signals into dense representations through unsupervised pre-training. Remarkably, our proposed method achieves state-of-the-art retrieval performance on several large-scale retrieval benchmarks without requiring any additional parameters, which provides a 67% training speed-up compared to standard masked auto-encoder pre-training with enhanced decoding.",True,True,"Guangyuan Ma and Xing Wu and Zijia Lin and Songlin Hu",2024.0,,,,ArXiv,"Drop your Decoder: Pre-training with Bag-of-Word Prediction for Dense Passage Retrieval",Pre-training with Bag-of-Word Prediction for Dense Passage Retrieval,https://arxiv.org/abs/2401.11248,We propose a modification to the traditional MAE by replacing the decoder of a masked auto-encoder with a completely simplified Bag-of-Word prediction task. "Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization",2505.19307v1,DBLP:journals/corr/abs-1904-08375,\cite{DBLP:journals/corr/abs-1904-08375},Document Expansion by Query Prediction,http://arxiv.org/abs/1904.08375v2,"One technique to improve the retrieval effectiveness of a search engine is to expand documents with terms that are related or representative of the documents' content.From the perspective of a question answering system, this might comprise questions the document can potentially answer. Following this observation, we propose a simple method that predicts which queries will be issued for a given document and then expands it with those predictions with a vanilla sequence-to-sequence model, trained using datasets consisting of pairs of query and relevant documents. By combining our method with a highly-effective re-ranking component, we achieve the state of the art in two retrieval tasks. In a latency-critical regime, retrieval results alone (without re-ranking) approach the effectiveness of more computationally expensive neural re-rankers but are much faster.",True,True,"Rodrigo Frassetto Nogueira and Wei Yang and Jimmy Lin and Kyunghyun Cho",2019.0,,,,ArXiv,Document Expansion by Query Prediction,Document Expansion by Query Prediction,http://arxiv.org/pdf/1904.08375v2,"One technique to improve the retrieval effectiveness of a search engine is to expand documents with terms that are related or representative of the documents' content.From the perspective of a question answering system, this might comprise questions the document can potentially answer. Following this observation, we propose a simple method that predicts which queries will be issued for a given document and then expands it with those predictions with a vanilla sequence-to-sequence model, trained using datasets consisting of pairs of query and relevant documents. By combining our method with a highly-effective re-ranking component, we achieve the state of the art in two retrieval tasks. In a latency-critical regime, retrieval results alone (without re-ranking) approach the effectiveness of more computationally expensive neural re-rankers but are much faster." 
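Annotation (not part of the source records): the doc2query entry above expands each document with queries predicted by a sequence-to-sequence model before indexing, so that term-matching retrieval (e.g., BM25) can find the document under vocabulary it does not itself contain. A sketch of that expansion step; `predict_queries` is a hypothetical stand-in for the trained generator (T5 in the docTTTTTquery follow-up).

```python
# Sketch of doc2query-style document expansion: append model-predicted queries
# to each document before building a term-based index.
def predict_queries(doc: str, n: int = 3) -> list[str]:
    # Hypothetical placeholder; a real system would sample from a seq2seq
    # model trained on (query, relevant document) pairs.
    return [f"what does this passage say about {w}" for w in doc.split()[:n]]

def expand(doc: str) -> str:
    return doc + " " + " ".join(predict_queries(doc))

doc = "GPUs accelerate billion-scale similarity search."
print(expand(doc))  # the expanded text is what gets indexed
```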
"Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization",2505.19307v1,nogueira2019doc2query,\cite{nogueira2019doc2query},From doc2query to docTTTTTquery,,,True,False,Rodrigo Nogueira and Jimmy Lin,2019.0,,,,,From doc2query to docTTTTTquery,[PDF] From doc2query to docTTTTTquery,https://www.semanticscholar.org/paper/From-doc2query-to-docTTTTTquery-Cheriton/54fa64b74ec020699fad989f85e74e50c7a34445,"The setup in this work follows doc2query, but with T5 as the expansion model, and it is found that the top-k sampling decoder produces more effective" "Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization",2505.19307v1,DBLP:conf/nips/VaswaniSPUJGKP17,\cite{DBLP:conf/nips/VaswaniSPUJGKP17},Attention Is All You Need,http://arxiv.org/abs/1706.03762v7,"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.",True,True,"Ashish Vaswani and Noam Shazeer and Niki Parmar and Jakob Uszkoreit and Llion Jones and Aidan N. Gomez and Lukasz Kaiser and Illia Polosukhin",2017.0,,,,,Attention Is All You Need,Attention Is All You Need,http://arxiv.org/pdf/1706.03762v7,"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data." 
"Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization",2505.19307v1,DBLP:conf/ecir/GospodinovMM23,\cite{DBLP:conf/ecir/GospodinovMM23},Doc2Query-: When Less is More,,,True,False,"Mitko Gospodinov and Sean MacAvaney and Craig Macdonald",2023.0,,,,,Doc2Query-: When Less is More,[2301.03266] Doc2Query--: When Less is More - arXiv,https://arxiv.org/abs/2301.03266,"Image 2: arxiv logo>cs> arXiv:2301.03266 **arXiv:2301.03266** (cs) View a PDF of the paper titled Doc2Query--: When Less is More, by Mitko Gospodinov and 2 other authors Cite as:arXiv:2301.03266 [cs.IR] (or arXiv:2301.03266v3 [cs.IR] for this version) View a PDF of the paper titled Doc2Query--: When Less is More, by Mitko Gospodinov and 2 other authors - [x] Bibliographic Explorer Toggle - [x] Connected Papers Toggle - [x] Litmaps Toggle - [x] scite.ai Toggle - [x] alphaXiv Toggle - [x] Links to Code Toggle - [x] DagsHub Toggle - [x] GotitPub Toggle - [x] Huggingface Toggle - [x] Links to Code Toggle - [x] ScienceCast Toggle - [x] Replicate Toggle - [x] Spaces Toggle - [x] Spaces Toggle - [x] Core recommender toggle " "Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization",2505.19307v1,DBLP:journals/corr/abs-2202-05144,\cite{DBLP:journals/corr/abs-2202-05144},"InPars: Data Augmentation for Information Retrieval using Large Language Models",http://arxiv.org/abs/2202.05144v1,"The information retrieval community has recently witnessed a revolution due to large pretrained transformer models. Another key ingredient for this revolution was the MS MARCO dataset, whose scale and diversity has enabled zero-shot transfer learning to various tasks. However, not all IR tasks and domains can benefit from one single dataset equally. Extensive research in various NLP tasks has shown that using domain-specific training data, as opposed to a general-purpose one, improves the performance of neural models. In this work, we harness the few-shot capabilities of large pretrained language models as synthetic data generators for IR tasks. We show that models finetuned solely on our unsupervised dataset outperform strong baselines such as BM25 as well as recently proposed self-supervised dense retrieval methods. Furthermore, retrievers finetuned on both supervised and our synthetic data achieve better zero-shot transfer than models finetuned only on supervised data. Code, models, and data are available at https://github.com/zetaalphavector/inpars .",True,True,"Luiz Henrique Bonifacio and Hugo Abonizio and Marzieh Fadaee and Rodrigo Frassetto Nogueira",2022.0,,,,ArXiv,"InPars: Data Augmentation for Information Retrieval using Large Language Models",InPars: Data Augmentation for Information Retrieval using Large ...,https://arxiv.org/abs/2202.05144,"In this work, we harness the few-shot capabilities of large pretrained language models as synthetic data generators for IR tasks." 
"Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization",2505.19307v1,DBLP:journals/corr/abs-2301-01820,\cite{DBLP:journals/corr/abs-2301-01820},"{InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval}",,,True,False,"Vitor Jeronymo and Luiz Henrique Bonifacio and Hugo Abonizio and Marzieh Fadaee and Roberto de Alencar Lotufo and Jakub Zavrel and Rodrigo Frassetto Nogueira",2023.0,,,,ArXiv,"{InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval}",(PDF) InPars-v2: Large Language Models as Efficient Dataset ...,https://www.researchgate.net/publication/366902520_InPars-v2_Large_Language_Models_as_Efficient_Dataset_Generators_for_Information_Retrieval,"(PDF) InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval Recently, InPars introduced a method to efficiently use large language models (LLMs) in information retrieval tasks: via few-shot examples, an LLM is induced to generate relevant queries for documents. In this work we introduce InPars-v2, a dataset generator that uses open-source LLMs and existing powerful rerankers to select synthetic query-document pairs for training. A simple BM25 retrieval pipeline followed by a monoT5 reranker finetuned on InPars-v2 data achieves new state-of-the-art results on the BEIR benchmark. We also made available all the synthetic data generated in this work for the 18 different datasets in the BEIR benchmark which took more than 2,000 GPU hours to be generated as well as the reranker models finetuned on the synthetic data." "Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization",2505.19307v1,DBLP:conf/iclr/DaiZMLNLBGHC23,\cite{DBLP:conf/iclr/DaiZMLNLBGHC23},Promptagator: Few-shot Dense Retrieval From 8 Examples,http://arxiv.org/abs/2209.11755v1,"Much recent research on information retrieval has focused on how to transfer from one task (typically with abundant supervised data) to various other tasks where supervision is limited, with the implicit assumption that it is possible to generalize from one task to all the rest. However, this overlooks the fact that there are many diverse and unique retrieval tasks, each targeting different search intents, queries, and search domains. In this paper, we suggest to work on Few-shot Dense Retrieval, a setting where each task comes with a short description and a few examples. To amplify the power of a few examples, we propose Prompt-base Query Generation for Retriever (Promptagator), which leverages large language models (LLM) as a few-shot query generator, and creates task-specific retrievers based on the generated data. Powered by LLM's generalization ability, Promptagator makes it possible to create task-specific end-to-end retrievers solely based on a few examples {without} using Natural Questions or MS MARCO to train %question generators or dual encoders. Surprisingly, LLM prompting with no more than 8 examples allows dual encoders to outperform heavily engineered models trained on MS MARCO like ColBERT v2 by more than 1.2 nDCG on average on 11 retrieval sets. Further training standard-size re-rankers using the same generated data yields another 5.0 point nDCG improvement. 
Our studies determine that query generation can be far more effective than previously observed, especially when a small amount of task-specific knowledge is given.",True,True,"Zhuyun Dai and Vincent Y. Zhao and Ji Ma and Yi Luan and Jianmo Ni and Jing Lu and Anton Bakalov and Kelvin Guu and Keith B. Hall and Ming{-}Wei Chang",2023.0,,,,,Promptagator: Few-shot Dense Retrieval From 8 Examples,Promptagator: Few-shot Dense Retrieval From 8 Examples,https://openreview.net/forum?id=gmL46YMpu2J,"In this paper, we suggest to work on Few-shot Dense Retrieval, a setting where each task comes with a short description and a few examples." "Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization",2505.19307v1,DBLP:journals/corr/abs-2403-20327,\cite{DBLP:journals/corr/abs-2403-20327},Gecko: Versatile Text Embeddings Distilled from Large Language Models,http://arxiv.org/abs/2403.20327v1,"We present Gecko, a compact and versatile text embedding model. Gecko achieves strong retrieval performance by leveraging a key idea: distilling knowledge from large language models (LLMs) into a retriever. Our two-step distillation process begins with generating diverse, synthetic paired data using an LLM. Next, we further refine the data quality by retrieving a set of candidate passages for each query, and relabeling the positive and hard negative passages using the same LLM. The effectiveness of our approach is demonstrated by the compactness of the Gecko. On the Massive Text Embedding Benchmark (MTEB), Gecko with 256 embedding dimensions outperforms all existing entries with 768 embedding size. Gecko with 768 embedding dimensions achieves an average score of 66.31, competing with 7x larger models and 5x higher dimensional embeddings.",True,True,"Jinhyuk Lee and Zhuyun Dai and Xiaoqi Ren and Blair Chen and Daniel Cer and Jeremy R. Cole and Kai Hui and Michael Boratko and Rajvi Kapadia and Wen Ding and Yi Luan and Sai Meher Karthik Duddu and Gustavo Hern{\'{a}}ndez {\'{A}}brego and Weiqiang Shi and Nithi Gupta and Aditya Kusupati and Prateek Jain and Siddhartha Reddy Jonnalagadda and Ming{-}Wei Chang and Iftekhar Naim",2024.0,,,,ArXiv,Gecko: Versatile Text Embeddings Distilled from Large Language Models,Gecko: Versatile Text Embeddings Distilled from Large Language Models,http://arxiv.org/pdf/2403.20327v1,"We present Gecko, a compact and versatile text embedding model. Gecko achieves strong retrieval performance by leveraging a key idea: distilling knowledge from large language models (LLMs) into a retriever. Our two-step distillation process begins with generating diverse, synthetic paired data using an LLM. Next, we further refine the data quality by retrieving a set of candidate passages for each query, and relabeling the positive and hard negative passages using the same LLM. The effectiveness of our approach is demonstrated by the compactness of the Gecko. On the Massive Text Embedding Benchmark (MTEB), Gecko with 256 embedding dimensions outperforms all existing entries with 768 embedding size. Gecko with 768 embedding dimensions achieves an average score of 66.31, competing with 7x larger models and 5x higher dimensional embeddings." "Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization",2505.19307v1,DBLP:journals/corr/abs-2411-00722,\cite{DBLP:journals/corr/abs-2411-00722},Token-level Proximal Policy Optimization for Query Generation,http://arxiv.org/abs/2411.00722v1,"Query generation is a critical task for web search engines (e.g. 
Google, Bing) and recommendation systems. Recently, state-of-the-art query generation methods leverage Large Language Models (LLMs) for their strong capabilities in context understanding and text generation. However, they still face challenges in generating high-quality queries in terms of inferring user intent based on their web search interaction history. In this paper, we propose Token-level Proximal Policy Optimization (TPPO), a noval approach designed to empower LLMs perform better in query generation through fine-tuning. TPPO is based on the Reinforcement Learning from AI Feedback (RLAIF) paradigm, consisting of a token-level reward model and a token-level proximal policy optimization module to address the sparse reward challenge in traditional RLAIF frameworks. To evaluate the effectiveness and robustness of TPPO, we conducted experiments on both open-source dataset and an industrial dataset that was collected from a globally-used search engine. The experimental results demonstrate that TPPO significantly improves the performance of query generation for LLMs and outperforms its existing competitors.",True,True,"Yichen Ouyang and Lu Wang and Fangkai Yang and Pu Zhao and Chenghua Huang and Jianfeng Liu and Bochen Pang and Yaming Yang and Yuefeng Zhan and Hao Sun and Qingwei Lin and Saravan Rajmohan and Weiwei Deng and Dongmei Zhang and Feng Sun and Qi Zhang",2024.0,,,,ArXiv,Token-level Proximal Policy Optimization for Query Generation,Token-level Proximal Policy Optimization for Query Generation,https://www.researchgate.net/publication/385510091_Token-level_Proximal_Policy_Optimization_for_Query_Generation,"In this paper, we propose Token-level Proximal Policy Optimization (TPPO), a noval approach designed to empower LLMs perform better in query generation through" "Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization",2505.19307v1,DBLP:journals/corr/SchulmanWDRK17,\cite{DBLP:journals/corr/SchulmanWDRK17},Proximal Policy Optimization Algorithms,http://arxiv.org/abs/1707.06347v2,"We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a ""surrogate"" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.",True,True,"John Schulman and Filip Wolski and Prafulla Dhariwal and Alec Radford and Oleg Klimov",2017.0,,,,ArXiv,Proximal Policy Optimization Algorithms,Proximal Policy Optimization Algorithms,http://arxiv.org/pdf/1707.06347v2,"We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a ""surrogate"" objective function using stochastic gradient ascent. 
Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time." "Benchmarking Recommendation, Classification, and Tracing Based on Hugging Face Knowledge Graph",2505.17507v1,DEKR,\cite{DEKR},"{DEKR:} Description Enhanced Knowledge Graph for Machine Learning Method Recommendation",,,True,False,"Xianshuai Cao and Yuliang Shi and Han Yu and Jihu Wang and Xinjun Wang and Zhongmin Yan and Zhiyong Chen",2021.0,,https://doi.org/10.1145/3404835.3462900,10.1145/3404835.3462900,,"{DEKR:} Description Enhanced Knowledge Graph for Machine Learning Method Recommendation",Description Enhanced Knowledge Graph for Machine Learning ...,https://www.researchgate.net/publication/353188658_DEKR_Description_Enhanced_Knowledge_Graph_for_Machine_Learning_Method_Recommendation,"To further improve the performance of machine learning method recommendation, cross-modal knowledge graph contrastive learning (Cao et al., 2022) maximized the" "Benchmarking Recommendation, Classification, and Tracing Based on Hugging Face Knowledge Graph",2505.17507v1,tse23,\cite{tse23},"Task-Oriented {ML/DL} Library Recommendation Based on a Knowledge Graph",,,True,False,"Mingwei Liu and Chengyuan Zhao and Xin Peng and Simin Yu and Haofen Wang and Chaofeng Sha",2023.0,,https://doi.org/10.1109/TSE.2023.3285280,10.1109/TSE.2023.3285280,{IEEE} Trans. Software Eng.,"Task-Oriented {ML/DL} Library Recommendation Based on a Knowledge Graph",Task-Oriented ML/DL Library Recommendation Based on ...,https://www.researchgate.net/publication/371549606_Task-Oriented_MLDL_Library_Recommendation_based_on_a_Knowledge_Graph,"AI applications often use ML/DL (Machine Learning/Deep Learning) models to implement specific AI tasks. As application developers usually are not AI experts, they often choose to integrate existing implementations of ML/DL models as libraries for their AI tasks. It constructs a knowledge graph that captures AI tasks, ML/DL models, model implementations, repositories, and their relationships by extracting knowledge from different sources such as ML/DL resource websites, papers, ML/DL frameworks, and repositories. Based on the knowledge graph, MLTaskKG recommends ML/DL libraries for developers by matching their requirements on tasks, model characteristics, and implementation information. 
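The Schulman et al. entry above describes PPO's clipped surrogate objective. A minimal numpy sketch of that objective follows; the arrays and the `ppo_clip_objective` helper are illustrative stand-ins for quantities that would normally come from a policy network and an advantage estimator, not part of any cited implementation.

```python
# Minimal sketch of PPO's clipped surrogate objective (Schulman et al., 2017).
# Toy numpy arrays stand in for policy-network outputs and advantage estimates.
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """Mean clipped surrogate objective over a batch of sampled actions."""
    ratio = np.exp(logp_new - logp_old)                  # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # The elementwise minimum gives a pessimistic bound that discourages
    # destructively large policy updates, enabling multiple minibatch epochs.
    return np.mean(np.minimum(unclipped, clipped))

logp_old = np.log(np.array([0.20, 0.50, 0.30]))
logp_new = np.log(np.array([0.25, 0.45, 0.30]))
advantages = np.array([1.0, -0.5, 0.2])
print(ppo_clip_objective(logp_new, logp_old, advantages))
```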
Abstract—AI applications often use ML/DL (Machine Learning/Deep Learning) models to implement specific AI tasks. As application developers usually are not AI experts, they often choose to integrate existing implementations of ML/DL models as libraries for their AI tasks. It constructs a knowledge graph that captures AI tasks, ML/DL models, model implementations, repositories, and their relationships by extracting" "Benchmarking Recommendation, Classification, and Tracing Based on Hugging Face Knowledge Graph",2505.17507v1,OAGBench,\cite{OAGBench},OAG-Bench: {A} Human-Curated Benchmark for Academic Graph Mining,,,True,False,"Fanjin Zhang and Shijie Shi and Yifan Zhu and Bo Chen and Yukuo Cen and Jifan Yu and Yelin Chen and Lulu Wang and Qingfei Zhao and Yuqing Cheng and Tianyi Han and Yuwei An and Dan Zhang and Weng Lam Tam and Kun Cao and Yunhe Pang and Xinyu Guan and Huihui Yuan and Jian Song and Xiaoyan Li and Yuxiao Dong and Jie Tang",2024.0,,https://doi.org/10.1145/3637528.3672354,10.1145/3637528.3672354,,OAG-Bench: {A} Human-Curated Benchmark for Academic Graph Mining,[PDF] A Human-Curated Benchmark for Academic Graph Mining - arXiv,https://arxiv.org/pdf/2402.15810,"OAG-Bench is a comprehensive, human-curated benchmark for academic graph mining, based on the Open Academic Graph, covering 10 tasks, 20 datasets, and 70+" "Benchmarking Recommendation, Classification, and Tracing Based on Hugging Face Knowledge Graph",2505.17507v1,paper2repo,\cite{paper2repo},paper2repo: GitHub Repository Recommendation for Academic Papers,http://arxiv.org/abs/2004.06059v1,"GitHub has become a popular social application platform, where a large number of users post their open source projects. 
In particular, an increasing number of researchers release repositories of source code related to their research papers in order to attract more people to follow their work. Motivated by this trend, we describe a novel item-item cross-platform recommender system, $\textit{paper2repo}$, that recommends relevant repositories on GitHub that match a given paper in an academic search system such as Microsoft Academic. The key challenge is to identify the similarity between an input paper and its related repositories across the two platforms, $\textit{without the benefit of human labeling}$. Towards that end, paper2repo integrates text encoding and constrained graph convolutional networks (GCN) to automatically learn and map the embeddings of papers and repositories into the same space, where proximity offers the basis for recommendation. To make our method more practical in real life systems, labels used for model training are computed automatically from features of user actions on GitHub. In machine learning, such automatic labeling is often called {\em distant supervision\/}. To the authors' knowledge, this is the first distant-supervised cross-platform (paper to repository) matching system. We evaluate the performance of paper2repo on real-world data sets collected from GitHub and Microsoft Academic. Results demonstrate that it outperforms other state of the art recommendation methods." "Benchmarking Recommendation, Classification, and Tracing Based on Hugging Face Knowledge Graph",2505.17507v1,RepoRecommendation,\cite{RepoRecommendation},"Personalized Repository Recommendation Service for Developers with Multi-modal Features Learning",,,True,False,"Yueshen Xu and Yuhong Jiang and Xinkui Zhao and Ying Li and Rui Li",2023.0,,https://doi.org/10.1109/ICWS60048.2023.00064,10.1109/ICWS60048.2023.00064,,"Personalized Repository Recommendation Service for Developers with Multi-modal Features Learning",AIDC-AI/Awesome-Unified-Multimodal-Models,https://github.com/AIDC-AI/Awesome-Unified-Multimodal-Models,"| ANOLE | ANOLE: An Open, Autoregressive, Native Large Multimodal Models for Interleaved Image-Text Generation | arXiv | 2024/07/08 | Github | - | | MM-Interleaved | MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer | arXiv | 2024/01/18 | Github | - | | Nexus-Gen | Nexus-Gen: A Unified Model for Image Understanding, Generation, and Editing | arXiv | 2025/04/30 | Github | Demo | | VARGPT | VARGPT: Unified Understanding and Generation in a Visual Autoregressive Multimodal Large Language Model | arXiv | 2025/01/21 | Github | - |" "Benchmarking Recommendation, Classification, and Tracing Based on Hugging Face Knowledge Graph",2505.17507v1,GRETA,\cite{GRETA},{GRETA:} Graph-Based Tag Assignment for GitHub Repositories,,,True,False,"Xuyang Cai and Jiangang Zhu and Beijun Shen and Yuting Chen",2016.0,,https://doi.org/10.1109/COMPSAC.2016.124,10.1109/COMPSAC.2016.124,,{GRETA:} Graph-Based Tag Assignment for GitHub Repositories,GRETA: Graph-Based Tag Assignment for GitHub Repositories,https://ieeexplore.ieee.org/iel7/7551592/7551973/07551994.pdf,"GRETA is a novel, graph-based approach to tag assignment for repositories on GitHub, which allows tags to be assigned by some graph algorithms. 
GRETA is also a" "Benchmarking Recommendation, Classification, and Tracing Based on Hugging Face Knowledge Graph",2505.17507v1,EASE24,\cite{EASE24},"Automated categorization of pre-trained models for software engineering: A case study with a Hugging Face dataset",http://arxiv.org/abs/2405.13185v1,"Software engineering (SE) activities have been revolutionized by the advent of pre-trained models (PTMs), defined as large machine learning (ML) models that can be fine-tuned to perform specific SE tasks. However, users with limited expertise may need help to select the appropriate model for their current task. To tackle the issue, the Hugging Face (HF) platform simplifies the use of PTMs by collecting, storing, and curating several models. Nevertheless, the platform currently lacks a comprehensive categorization of PTMs designed specifically for SE, i.e., the existing tags are more suited to generic ML categories. This paper introduces an approach to address this gap by enabling the automatic classification of PTMs for SE tasks. First, we utilize a public dump of HF to extract PTMs information, including model documentation and associated tags. Then, we employ a semi-automated method to identify SE tasks and their corresponding PTMs from existing literature. The approach involves creating an initial mapping between HF tags and specific SE tasks, using a similarity-based strategy to identify PTMs with relevant tags. The evaluation shows that model cards are informative enough to classify PTMs considering the pipeline tag. Moreover, we provide a mapping between SE tasks and stored PTMs by relying on model names.",True,True,"Claudio Di Sipio and Riccardo Rubei and Juri Di Rocco and Davide Di Ruscio and Phuong T. Nguyen",2024.0,,https://doi.org/10.1145/3661167.3661215,10.1145/3661167.3661215,,"Automated categorization of pre-trained models for software engineering: A case study with a Hugging Face dataset",Automated categorization of pre-trained models for software ... - arXiv,https://arxiv.org/abs/2405.13185,"To tackle the issue, the Hugging Face (HF) platform simplifies the use of PTMs by collecting, storing, and curating several models. Nevertheless" "Benchmarking Recommendation, Classification, and Tracing Based on Hugging Face Knowledge Graph",2505.17507v1,ESEM24,\cite{ESEM24},"Automatic Categorization of GitHub Actions with Transformers and Few-shot Learning",http://arxiv.org/abs/2407.16946v1,"In the GitHub ecosystem, workflows are used as an effective means to automate development tasks and to set up a Continuous Integration and Delivery (CI/CD pipeline). GitHub Actions (GHA) have been conceived to provide developers with a practical tool to create and maintain workflows, avoiding reinventing the wheel and cluttering the workflow with shell commands. Properly leveraging the power of GitHub Actions can facilitate the development processes, enhance collaboration, and significantly impact project outcomes. To expose actions to search engines, GitHub allows developers to assign them to one or more categories manually. These are used as an effective means to group actions sharing similar functionality. Nevertheless, while providing a practical way to execute workflows, many actions have unclear purposes, and sometimes they are not categorized. In this work, we bridge such a gap by conceptualizing Gavel, a practical solution to increasing the visibility of actions in GitHub. 
By leveraging the content of README.MD files for each action, we use Transformer--a deep learning algorithm--to assign suitable categories to the action. We conducted an empirical investigation and compared Gavel with a state-of-the-art baseline. The experimental results show that our proposed approach can assign categories to GitHub actions effectively, thus outperforming the state-of-the-art baseline.",True,True,"Phuong T. Nguyen and Juri Di Rocco and Claudio Di Sipio and Mudita Shakya and Davide Di Ruscio and Massimiliano Di Penta",2024.0,,https://doi.org/10.1145/3674805.3690752,10.1145/3674805.3690752,,"Automatic Categorization of GitHub Actions with Transformers and Few-shot Learning",Automatic Categorization of GitHub Actions with Transformers and ...,https://arxiv.org/html/2407.16946v1,a GitHub actions visibility elevator based on transformers and few-shot learning to make actions more visible and accessible to developers. "Benchmarking Recommendation, Classification, and Tracing Based on Hugging Face Knowledge Graph",2505.17507v1,issue-PR-link-prediction,\cite{issue-PR-link-prediction},"Improving Issue-PR Link Prediction via Knowledge-Aware Heterogeneous Graph Learning",,,True,False,"Shuotong Bai and Huaxiao Liu and Enyan Dai and Lei Liu",2024.0,,https://doi.org/10.1109/TSE.2024.3408448,10.1109/TSE.2024.3408448,{IEEE} Trans. Software Eng.,"Improving Issue-PR Link Prediction via Knowledge-Aware Heterogeneous Graph Learning",Improving Issue-PR Link Prediction via Knowledge-Aware ...,https://www.researchgate.net/publication/381145630_Improving_Issue-PR_Link_Prediction_via_Knowledge-aware_Heterogeneous_Graph_Learning,"This method combines vector similarity, clustering techniques, and a deep learning model to improve the recommendation process. Additionally, Bai et al. [11]" "Unlearning for Federated Online Learning to Rank: A Reproducibility Study",2505.12791v1,kharitonov2019federated,\cite{kharitonov2019federated},Federated online learning to rank with evolution strategies,,,True,False,"Kharitonov, Eugene",2019.0,,,,,Federated online learning to rank with evolution strategies,Federated Online Learning to Rank with Evolution Strategies,https://arvinzhuang.github.io/publication/ECIR2021FOLTR,"Online Learning to Rank (OLTR) optimizes ranking models using implicit users' feedback, such as clicks, directly manipulating search engine results in" "Unlearning for Federated Online Learning to Rank: A Reproducibility Study",2505.12791v1,wang2021federated,\cite{wang2021federated},Federated online learning to rank with evolution strategies: a reproducibility study,,,True,False,"Wang, Shuyi and Zhuang, Shengyao and Zuccon, Guido",2021.0,,,,,Federated online learning to rank with evolution strategies: a reproducibility study,Federated Online Learning to Rank with Evolution Strategies,https://arvinzhuang.github.io/publication/ECIR2021FOLTR,"Abstract. 
Online Learning to Rank (OLTR) optimizes ranking models using implicit users' feedback, such as clicks, directly manipulating search engine results in" "Unlearning for Federated Online Learning to Rank: A Reproducibility Study",2505.12791v1,wang2021effective,\cite{wang2021effective},Effective and privacy-preserving federated online learning to rank,,,True,False,"Wang, Shuyi and Liu, Bing and Zhuang, Shengyao and Zuccon, Guido",2021.0,,,,,Effective and privacy-preserving federated online learning to rank,Effective and Privacy-preserving Federated Online Learning to Rank,https://dl.acm.org/doi/10.1145/3471158.3472236,"Empirical evaluation shows FPDGD significantly outperforms the only other federated OLTR method. In addition, FPDGD is more robust across different privacy" "Unlearning for Federated Online Learning to Rank: A Reproducibility Study",2505.12791v1,oosterhuis2018differentiable,\cite{oosterhuis2018differentiable},Differentiable Unbiased Online Learning to Rank,http://arxiv.org/abs/1809.08415v1,"Online Learning to Rank (OLTR) methods optimize rankers based on user interactions. State-of-the-art OLTR methods are built specifically for linear models. Their approaches do not extend well to non-linear models such as neural networks. We introduce an entirely novel approach to OLTR that constructs a weighted differentiable pairwise loss after each interaction: Pairwise Differentiable Gradient Descent (PDGD). PDGD breaks away from the traditional approach that relies on interleaving or multileaving and extensive sampling of models to estimate gradients. Instead, its gradient is based on inferring preferences between document pairs from user clicks and can optimize any differentiable model. We prove that the gradient of PDGD is unbiased w.r.t. user document pair preferences. Our experiments on the largest publicly available Learning to Rank (LTR) datasets show considerable and significant improvements under all levels of interaction noise. PDGD outperforms existing OLTR methods both in terms of learning speed as well as final convergence. Furthermore, unlike previous OLTR methods, PDGD also allows for non-linear models to be optimized effectively. Our results show that using a neural network leads to even better performance at convergence than a linear model. In summary, PDGD is an efficient and unbiased OLTR approach that provides a better user experience than previously possible.",True,True,"Oosterhuis, Harrie and de Rijke, Maarten",2018.0,,,,,Differentiable Unbiased Online Learning to Rank,Differentiable Unbiased Online Learning to Rank,http://arxiv.org/pdf/1809.08415v1,"Online Learning to Rank (OLTR) methods optimize rankers based on user interactions. State-of-the-art OLTR methods are built specifically for linear models. Their approaches do not extend well to non-linear models such as neural networks. We introduce an entirely novel approach to OLTR that constructs a weighted differentiable pairwise loss after each interaction: Pairwise Differentiable Gradient Descent (PDGD). PDGD breaks away from the traditional approach that relies on interleaving or multileaving and extensive sampling of models to estimate gradients. Instead, its gradient is based on inferring preferences between document pairs from user clicks and can optimize any differentiable model. We prove that the gradient of PDGD is unbiased w.r.t. user document pair preferences. 
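As a rough illustration of the PDGD idea described in the Oosterhuis & de Rijke entry above, the sketch below infers pairwise preferences from clicks and follows the gradient of a pairwise softmax preference probability for a linear ranker. The debiasing reweighting of the full method is deliberately omitted, and all names and data here are illustrative.

```python
# Toy PDGD-style update: clicked documents are treated as preferred over
# unclicked ones, and we ascend the log pairwise preference probability.
import numpy as np

def pdgd_step(weights, doc_feats, clicked, lr=0.1):
    scores = doc_feats @ weights
    grad = np.zeros_like(weights)
    for i in np.flatnonzero(clicked):            # preferred documents
        for j in np.flatnonzero(~clicked):       # non-preferred documents
            p_ij = 1.0 / (1.0 + np.exp(scores[j] - scores[i]))  # P(d_i > d_j)
            # gradient of log P(d_i > d_j) for a linear scoring function
            grad += (1.0 - p_ij) * (doc_feats[i] - doc_feats[j])
    return weights + lr * grad

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 4))                  # 5 displayed docs, 4 features
clicks = np.array([False, True, False, False, False])
print(pdgd_step(np.zeros(4), feats, clicks))
```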
Our experiments on the largest publicly available Learning to Rank (LTR) datasets show considerable and significant improvements under all levels of interaction noise. PDGD outperforms existing OLTR methods both in terms of learning speed as well as final convergence. Furthermore, unlike previous OLTR methods, PDGD also allows for non-linear models to be optimized effectively. Our results show that using a neural network leads to even better performance at convergence than a linear model. In summary, PDGD is an efficient and unbiased OLTR approach that provides a better user experience than previously possible." "Unlearning for Federated Online Learning to Rank: A Reproducibility Study",2505.12791v1,wang2022non,\cite{wang2022non},Is Non-IID Data a Threat in Federated Online Learning to Rank?,http://arxiv.org/abs/2204.09272v2,"In this perspective paper we study the effect of non independent and identically distributed (non-IID) data on federated online learning to rank (FOLTR) and chart directions for future work in this new and largely unexplored research area of Information Retrieval. In the FOLTR process, clients participate in a federation to jointly create an effective ranker from the implicit click signal originating in each client, without the need to share data (documents, queries, clicks). A well-known factor that affects the performance of federated learning systems, and that poses serious challenges to these approaches, is that there may be some type of bias in the way data is distributed across clients. While FOLTR systems are on their own rights a type of federated learning system, the presence and effect of non-IID data in FOLTR has not been studied. To this aim, we first enumerate possible data distribution settings that may showcase data bias across clients and thus give rise to the non-IID problem. Then, we study the impact of each setting on the performance of the current state-of-the-art FOLTR approach, the Federated Pairwise Differentiable Gradient Descent (FPDGD), and we highlight which data distributions may pose a problem for FOLTR methods. We also explore how common approaches proposed in the federated learning literature address non-IID issues in FOLTR. This allows us to unveil new research gaps that, we argue, future research in FOLTR should consider. This is an important contribution to the current state of FOLTR field because, for FOLTR systems to be deployed, the factors affecting their performance, including the impact of non-IID data, need to be thoroughly understood.",True,True,"Wang, Shuyi and Zuccon, Guido",2022.0,,,,,Is Non-IID Data a Threat in Federated Online Learning to Rank?,Is Non-IID Data a Threat in Federated Online Learning to Rank?,https://scispace.com/pdf/is-non-iid-data-a-threat-in-federated-online-learning-to-1hxia4ua.pdf,ABSTRACT. In this perspective paper we study the effect of non independent and identically distributed (non-IID) data on federated online learn- ing to rank "Unlearning for Federated Online Learning to Rank: A Reproducibility Study",2505.12791v1,wang2023analysis,\cite{wang2023analysis},"An Analysis of Untargeted Poisoning Attack and Defense Methods for Federated Online Learning to Rank Systems",http://arxiv.org/abs/2307.01565v1,"Federated online learning to rank (FOLTR) aims to preserve user privacy by not sharing their searchable data and search interactions, while guaranteeing high search effectiveness, especially in contexts where individual users have scarce training data and interactions. 
For this, FOLTR trains learning to rank models in an online manner -- i.e. by exploiting users' interactions with the search systems (queries, clicks), rather than labels -- and federatively -- i.e. by not aggregating interaction data in a central server for training purposes, but by training instances of a model on each user device on their own private data, and then sharing the model updates, not the data, across a set of users that have formed the federation. Existing FOLTR methods build upon advances in federated learning. While federated learning methods have been shown effective at training machine learning models in a distributed way without the need of data sharing, they can be susceptible to attacks that target either the system's security or its overall effectiveness. In this paper, we consider attacks on FOLTR systems that aim to compromise their search effectiveness. Within this scope, we experiment with and analyse data and model poisoning attack methods to showcase their impact on FOLTR search effectiveness. We also explore the effectiveness of defense methods designed to counteract attacks on FOLTR systems. We contribute an understanding of the effect of attack and defense methods for FOLTR systems, as well as identifying the key factors influencing their effectiveness.",True,True,"Wang, Shuyi and Zuccon, Guido",2023.0,,,,,"An Analysis of Untargeted Poisoning Attack and Defense Methods for Federated Online Learning to Rank Systems",An Analysis of Untargeted Poisoning Attack and Defense Methods ...,https://www.researchgate.net/publication/372136881_An_Analysis_of_Untargeted_Poisoning_Attack_and_Defense_Methods_for_Federated_Online_Learning_to_Rank_Systems,"Within this scope, we experiment with and analyse data and model poisoning attack methods to showcase their impact on FOLTR search effectiveness. We also" "Unlearning for Federated Online Learning to Rank: A Reproducibility Study",2505.12791v1,jia2022learning,\cite{jia2022learning},Learning Neural Ranking Models Online from Implicit User Feedback,http://arxiv.org/abs/2201.06658v1,"Existing online learning to rank (OL2R) solutions are limited to linear models, which are incompetent to capture possible non-linear relations between queries and documents. In this work, to unleash the power of representation learning in OL2R, we propose to directly learn a neural ranking model from users' implicit feedback (e.g., clicks) collected on the fly. We focus on RankNet and LambdaRank, due to their great empirical success and wide adoption in offline settings, and control the notorious explore-exploit trade-off based on the convergence analysis of neural networks using neural tangent kernel. Specifically, in each round of result serving, exploration is only performed on document pairs where the predicted rank order between the two documents is uncertain; otherwise, the ranker's predicted order will be followed in result ranking. We prove that under standard assumptions our OL2R solution achieves a gap-dependent upper regret bound of $O(\log^2(T))$, in which the regret is defined on the total number of mis-ordered pairs over $T$ rounds. 
Comparisons against an extensive set of state-of-the-art OL2R baselines on two public learning to rank benchmark datasets demonstrate the effectiveness of the proposed solution.",True,True,"Jia, Yiling and Wang, Hongning",2022.0,,,,,Learning Neural Ranking Models Online from Implicit User Feedback,Learning Neural Ranking Models Online from Implicit User Feedback,http://arxiv.org/pdf/2201.06658v1,"Existing online learning to rank (OL2R) solutions are limited to linear models, which are incompetent to capture possible non-linear relations between queries and documents. In this work, to unleash the power of representation learning in OL2R, we propose to directly learn a neural ranking model from users' implicit feedback (e.g., clicks) collected on the fly. We focus on RankNet and LambdaRank, due to their great empirical success and wide adoption in offline settings, and control the notorious explore-exploit trade-off based on the convergence analysis of neural networks using neural tangent kernel. Specifically, in each round of result serving, exploration is only performed on document pairs where the predicted rank order between the two documents is uncertain; otherwise, the ranker's predicted order will be followed in result ranking. We prove that under standard assumptions our OL2R solution achieves a gap-dependent upper regret bound of $O(\log^2(T))$, in which the regret is defined on the total number of mis-ordered pairs over $T$ rounds. Comparisons against an extensive set of state-of-the-art OL2R baselines on two public learning to rank benchmark datasets demonstrate the effectiveness of the proposed solution." "Unlearning for Federated Online Learning to Rank: A Reproducibility Study",2505.12791v1,wang2018efficient,\cite{wang2018efficient},Efficient Exploration of Gradient Space for Online Learning to Rank,http://arxiv.org/abs/1805.07317v1,"Online learning to rank (OL2R) optimizes the utility of returned search results based on implicit feedback gathered directly from users. To improve the estimates, OL2R algorithms examine one or more exploratory gradient directions and update the current ranker if a proposed one is preferred by users via an interleaved test. In this paper, we accelerate the online learning process by efficient exploration in the gradient space. Our algorithm, named as Null Space Gradient Descent, reduces the exploration space to only the \emph{null space} of recent poorly performing gradients. This prevents the algorithm from repeatedly exploring directions that have been discouraged by the most recent interactions with users. To improve sensitivity of the resulting interleaved test, we selectively construct candidate rankers to maximize the chance that they can be differentiated by candidate ranking documents in the current query; and we use historically difficult queries to identify the best ranker when tie occurs in comparing the rankers. 
Extensive experimental comparisons with the state-of-the-art OL2R algorithms on several public benchmarks confirmed the effectiveness of our proposal algorithm, especially in its fast learning convergence and promising ranking quality at an early stage.",True,True,"Wang, Huazheng and Langley, Ramsey and Kim, Sonwoo and McCord-Snook, Eric and Wang, Hongning",2018.0,,,,,Efficient Exploration of Gradient Space for Online Learning to Rank,Efficient Exploration of Gradient Space for Online Learning to Rank,http://arxiv.org/pdf/1805.07317v1,"Online learning to rank (OL2R) optimizes the utility of returned search results based on implicit feedback gathered directly from users. To improve the estimates, OL2R algorithms examine one or more exploratory gradient directions and update the current ranker if a proposed one is preferred by users via an interleaved test. In this paper, we accelerate the online learning process by efficient exploration in the gradient space. Our algorithm, named as Null Space Gradient Descent, reduces the exploration space to only the \emph{null space} of recent poorly performing gradients. This prevents the algorithm from repeatedly exploring directions that have been discouraged by the most recent interactions with users. To improve sensitivity of the resulting interleaved test, we selectively construct candidate rankers to maximize the chance that they can be differentiated by candidate ranking documents in the current query; and we use historically difficult queries to identify the best ranker when tie occurs in comparing the rankers. Extensive experimental comparisons with the state-of-the-art OL2R algorithms on several public benchmarks confirmed the effectiveness of our proposal algorithm, especially in its fast learning convergence and promising ranking quality at an early stage." "Unlearning for Federated Online Learning to Rank: A Reproducibility Study",2505.12791v1,liu2021federaser,\cite{liu2021federaser},Federaser: Enabling efficient client-level data removal from federated learning models,,,True,False,"Liu, Gaoyang and Ma, Xiaoqiang and Yang, Yang and Wang, Chen and Liu, Jiangchuan",2021.0,,,,,Federaser: Enabling efficient client-level data removal from federated learning models,FedEraser: Enabling Efficient Client-Level Data Removal ...,https://www.semanticscholar.org/paper/FedEraser%3A-Enabling-Efficient-Client-Level-Data-Liu-Ma/eadeffdec9fac8fd7f9aea732ca410eb082b7dcf,"FedEraser is presented, the first federated unlearning method-ology that can eliminate the influence of a federated client's data on the global FL model" "Unlearning for Federated Online Learning to Rank: A Reproducibility Study",2505.12791v1,wu2022federated,\cite{wu2022federated},Federated Unlearning with Knowledge Distillation,http://arxiv.org/abs/2201.09441v1,"Federated Learning (FL) is designed to protect the data privacy of each client during the training process by transmitting only models instead of the original data. However, the trained model may memorize certain information about the training data. With the recent legislation on right to be forgotten, it is crucially essential for the FL model to possess the ability to forget what it has learned from each client. We propose a novel federated unlearning method to eliminate a client's contribution by subtracting the accumulated historical updates from the model and leveraging the knowledge distillation method to restore the model's performance without using any data from the clients. 
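The Wu et al. entry above removes a client's contribution by subtracting its accumulated historical updates before a distillation-based recovery step. Below is a bare-bones sketch of only the subtraction step, under the simplifying assumption of equal-weight FedAvg aggregation; the distillation recovery is not shown and all names are illustrative.

```python
# Sketch: erase one client's contribution by subtracting its per-round
# updates from the global model (distillation-based recovery not shown).
import numpy as np

def subtract_client_updates(global_model, client_updates, client_id, n_clients):
    """client_updates[cid] holds that client's raw update vector per round;
    each round is assumed to have been aggregated with equal weight 1/n."""
    unlearned = global_model.copy()
    for update in client_updates[client_id]:
        unlearned -= update / n_clients
    return unlearned

model = np.array([1.0, 2.0, 3.0])
history = {0: [np.array([0.4, 0.0, -0.2]), np.array([0.2, 0.2, 0.0])]}
print(subtract_client_updates(model, history, client_id=0, n_clients=4))
```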
This method does not have any restrictions on the type of neural networks and does not rely on clients' participation, so it is practical and efficient in the FL system. We further introduce backdoor attacks in the training process to help evaluate the unlearning effect. Experiments on three canonical datasets demonstrate the effectiveness and efficiency of our method.",True,True,"Wu, Chen and Zhu, Sencun and Mitra, Prasenjit",2022.0,,,,arXiv preprint arXiv:2201.09441,Federated Unlearning with Knowledge Distillation,Federated Unlearning with Knowledge Distillation,http://arxiv.org/pdf/2201.09441v1,"Federated Learning (FL) is designed to protect the data privacy of each client during the training process by transmitting only models instead of the original data. However, the trained model may memorize certain information about the training data. With the recent legislation on right to be forgotten, it is crucially essential for the FL model to possess the ability to forget what it has learned from each client. We propose a novel federated unlearning method to eliminate a client's contribution by subtracting the accumulated historical updates from the model and leveraging the knowledge distillation method to restore the model's performance without using any data from the clients. This method does not have any restrictions on the type of neural networks and does not rely on clients' participation, so it is practical and efficient in the FL system. We further introduce backdoor attacks in the training process to help evaluate the unlearning effect. Experiments on three canonical datasets demonstrate the effectiveness and efficiency of our method." "Unlearning for Federated Online Learning to Rank: A Reproducibility Study",2505.12791v1,liu2022right,\cite{liu2022right},"The Right to be Forgotten in Federated Learning: An Efficient Realization with Rapid Retraining",http://arxiv.org/abs/2203.07320v1,"In Machine Learning, the emergence of \textit{the right to be forgotten} gave birth to a paradigm named \textit{machine unlearning}, which enables data holders to proactively erase their data from a trained model. Existing machine unlearning techniques focus on centralized training, where access to all holders' training data is a must for the server to conduct the unlearning process. It remains largely underexplored about how to achieve unlearning when full access to all training data becomes unavailable. One noteworthy example is Federated Learning (FL), where each participating data holder trains locally, without sharing their training data to the central server. In this paper, we investigate the problem of machine unlearning in FL systems. We start with a formal definition of the unlearning problem in FL and propose a rapid retraining approach to fully erase data samples from a trained FL model. The resulting design allows data holders to jointly conduct the unlearning process efficiently while keeping their training data locally. Our formal convergence and complexity analysis demonstrate that our design can preserve model utility with high efficiency. 
Extensive evaluations on four real-world datasets illustrate the effectiveness and performance of our proposed realization.",True,True,"Liu, Yi and Xu, Lei and Yuan, Xingliang and Wang, Cong and Li, Bo",2022.0,,,,,"The Right to be Forgotten in Federated Learning: An Efficient Realization with Rapid Retraining",The Right to be Forgotten in Federated Learning: An Efficient ...,https://ieeexplore.ieee.org/iel7/9796607/9796652/09796721.pdf,"This paper proposes a rapid retraining approach in Federated Learning to erase data samples, using a distributed Newton-type model update algorithm." "Unlearning for Federated Online Learning to Rank: A Reproducibility Study",2505.12791v1,halimi2022federated,\cite{halimi2022federated},Federated Unlearning: How to Efficiently Erase a Client in FL?,http://arxiv.org/abs/2207.05521v3,"With privacy legislation empowering the users with the right to be forgotten, it has become essential to make a model amenable for forgetting some of its training data. However, existing unlearning methods in the machine learning context can not be directly applied in the context of distributed settings like federated learning due to the differences in learning protocol and the presence of multiple actors. In this paper, we tackle the problem of federated unlearning for the case of erasing a client by removing the influence of their entire local data from the trained global model. To erase a client, we propose to first perform local unlearning at the client to be erased, and then use the locally unlearned model as the initialization to run very few rounds of federated learning between the server and the remaining clients to obtain the unlearned global model. We empirically evaluate our unlearning method by employing multiple performance measures on three datasets, and demonstrate that our unlearning method achieves comparable performance as the gold standard unlearning method of federated retraining from scratch, while being significantly efficient. Unlike prior works, our unlearning method neither requires global access to the data used for training nor the history of the parameter updates to be stored by the server or any of the clients.",True,True,"Halimi, Anisa and Kadhe, Swanand and Rawat, Ambrish and Baracaldo, Nathalie",2022.0,,,,arXiv preprint arXiv:2207.05521,Federated Unlearning: How to Efficiently Erase a Client in FL?,Federated Unlearning: How to Efficiently Erase a Client in FL?,http://arxiv.org/pdf/2207.05521v3,"With privacy legislation empowering the users with the right to be forgotten, it has become essential to make a model amenable for forgetting some of its training data. However, existing unlearning methods in the machine learning context can not be directly applied in the context of distributed settings like federated learning due to the differences in learning protocol and the presence of multiple actors. In this paper, we tackle the problem of federated unlearning for the case of erasing a client by removing the influence of their entire local data from the trained global model. To erase a client, we propose to first perform local unlearning at the client to be erased, and then use the locally unlearned model as the initialization to run very few rounds of federated learning between the server and the remaining clients to obtain the unlearned global model. 
We empirically evaluate our unlearning method by employing multiple performance measures on three datasets, and demonstrate that our unlearning method achieves comparable performance as the gold standard unlearning method of federated retraining from scratch, while being significantly efficient. Unlike prior works, our unlearning method neither requires global access to the data used for training nor the history of the parameter updates to be stored by the server or any of the clients." "Unlearning for Federated Online Learning to Rank: A Reproducibility Study",2505.12791v1,yuan2023federated,\cite{yuan2023federated},Federated Unlearning for On-Device Recommendation,http://arxiv.org/abs/2210.10958v2,"The increasing data privacy concerns in recommendation systems have made federated recommendations (FedRecs) attract more and more attention. Existing FedRecs mainly focus on how to effectively and securely learn personal interests and preferences from their on-device interaction data. Still, none of them considers how to efficiently erase a user's contribution to the federated training process. We argue that such a dual setting is necessary. First, from the privacy protection perspective, ``the right to be forgotten'' requires that users have the right to withdraw their data contributions. Without the reversible ability, FedRecs risk breaking data protection regulations. On the other hand, enabling a FedRec to forget specific users can improve its robustness and resistance to malicious clients' attacks. To support user unlearning in FedRecs, we propose an efficient unlearning method FRU (Federated Recommendation Unlearning), inspired by the log-based rollback mechanism of transactions in database management systems. It removes a user's contribution by rolling back and calibrating the historical parameter updates and then uses these updates to speed up federated recommender reconstruction. However, storing all historical parameter updates on resource-constrained personal devices is challenging and even infeasible. In light of this challenge, we propose a small-sized negative sampling method to reduce the number of item embedding updates and an importance-based update selection mechanism to store only important model updates. To evaluate the effectiveness of FRU, we propose an attack method to disturb FedRecs via a group of compromised users and use FRU to recover recommenders by eliminating these users' influence. Finally, we conduct experiments on two real-world recommendation datasets with two widely used FedRecs to show the efficiency and effectiveness of our proposed approaches.",True,True,"Yuan, Wei and Yin, Hongzhi and Wu, Fangzhao and Zhang, Shijie and He, Tieke and Wang, Hao",2023.0,,,,,Federated Unlearning for On-Device Recommendation,Federated Unlearning for On-Device Recommendation,https://dl.acm.org/doi/10.1145/3539597.3570463,"To support user unlearning in federated recommendation systems, we propose an efficient unlearning method FRU (Federated Recommendation Unlearning), inspired by" "Unlearning for Federated Online Learning to Rank: A Reproducibility Study",2505.12791v1,zhu2023heterogeneous,\cite{zhu2023heterogeneous},"Heterogeneous Federated Knowledge Graph Embedding Learning and Unlearning",http://arxiv.org/abs/2302.02069v2,"Federated Learning (FL) recently emerges as a paradigm to train a global machine learning model across distributed clients without sharing raw data. 
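The Halimi et al. entry above erases a client by local unlearning followed by a few recovery rounds among the remaining clients. The toy sketch below shows only that recovery loop as a generic FedAvg skeleton; the clients are synthetic stand-ins pulling the model toward random optima, not the paper's actual method.

```python
# Toy FedAvg recovery loop run after local unlearning, with synthetic clients.
import numpy as np

def fedavg_recovery(init_model, clients, rounds=3, lr=0.5):
    model = init_model.copy()
    for _ in range(rounds):
        updates = [client(model) for client in clients]   # remaining clients only
        model += lr * np.mean(updates, axis=0)            # FedAvg-style aggregation
    return model

rng = np.random.default_rng(1)
optima = [rng.normal(size=3) for _ in range(4)]           # each client's local optimum
clients = [lambda m, t=t: t - m for t in optima]          # pull the model toward it
print(fedavg_recovery(np.zeros(3), clients))              # starts from unlearned model
```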
Knowledge Graph (KG) embedding represents KGs in a continuous vector space, serving as the backbone of many knowledge-driven applications. As a promising combination, federated KG embedding can fully take advantage of knowledge learned from different clients while preserving the privacy of local data. However, realistic problems such as data heterogeneity and knowledge forgetting still remain to be concerned. In this paper, we propose FedLU, a novel FL framework for heterogeneous KG embedding learning and unlearning. To cope with the drift between local optimization and global convergence caused by data heterogeneity, we propose mutual knowledge distillation to transfer local knowledge to global, and absorb global knowledge back. Moreover, we present an unlearning method based on cognitive neuroscience, which combines retroactive interference and passive decay to erase specific knowledge from local clients and propagate to the global model by reusing knowledge distillation. We construct new datasets for assessing realistic performance of the state-of-the-arts. Extensive experiments show that FedLU achieves superior results in both link prediction and knowledge forgetting.",True,True,"Zhu, Xiangrong and Li, Guangyao and Hu, Wei",2023.0,,,,,"Heterogeneous Federated Knowledge Graph Embedding Learning and Unlearning",Heterogeneous Federated Knowledge Graph Embedding ...,https://dl.acm.org/doi/10.1145/3543507.3583305,"In this paper, we propose FedLU, a novel FL framework for heterogeneous KG embedding learning and unlearning. To cope with the drift between" "Unlearning for Federated Online Learning to Rank: A Reproducibility Study",2505.12791v1,wang2024forget,\cite{wang2024forget},How to Forget Clients in Federated Online Learning to Rank?,http://arxiv.org/abs/2401.13410v1,"Data protection legislation like the European Union's General Data Protection Regulation (GDPR) establishes the \textit{right to be forgotten}: a user (client) can request contributions made using their data to be removed from learned models. In this paper, we study how to remove the contributions made by a client participating in a Federated Online Learning to Rank (FOLTR) system. In a FOLTR system, a ranker is learned by aggregating local updates to the global ranking model. Local updates are learned in an online manner at a client-level using queries and implicit interactions that have occurred within that specific client. By doing so, each client's local data is not shared with other clients or with a centralised search service, while at the same time clients can benefit from an effective global ranking model learned from contributions of each client in the federation. In this paper, we study an effective and efficient unlearning method that can remove a client's contribution without compromising the overall ranker effectiveness and without needing to retrain the global ranker from scratch. A key challenge is how to measure whether the model has unlearned the contributions from the client $c^*$ that has requested removal. For this, we instruct $c^*$ to perform a poisoning attack (add noise to this client updates) and then we measure whether the impact of the attack is lessened when the unlearning process has taken place. 
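The evaluation protocol in the Wang et al. FOLTR-unlearning entry just above can be sketched in a few lines: the departing client poisons its updates with noise, and unlearning is judged by how much of the poisoning-induced effectiveness drop disappears. Everything below, including the stand-in `evaluate` metric, is illustrative rather than the paper's code.

```python
# Sketch of the poisoning-based unlearning check: compare the effectiveness
# drop caused by the poisoning client before and after unlearning.
import numpy as np

rng = np.random.default_rng(2)

def poison(update, scale=5.0):
    return update + rng.normal(scale=scale, size=update.shape)

def unlearning_effect(evaluate, clean, poisoned, unlearned):
    drop_before = evaluate(clean) - evaluate(poisoned)
    drop_after = evaluate(clean) - evaluate(unlearned)
    return drop_before, drop_after            # success: drop_after << drop_before

evaluate = lambda model: -np.linalg.norm(model)   # toy stand-in for nDCG
clean = np.zeros(3)
poisoned = poison(clean.copy())
unlearned = 0.1 * poisoned                        # pretend unlearning removed most noise
print(unlearning_effect(evaluate, clean, poisoned, unlearned))
```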
Through experiments on four datasets, we demonstrate the effectiveness and efficiency of the unlearning strategy under different combinations of parameter settings.",True,True,"Wang, Shuyi and Liu, Bing and Zuccon, Guido",2024.0,,,,,How to Forget Clients in Federated Online Learning to Rank?,How to Forget Clients in Federated Online Learning to Rank?,http://arxiv.org/pdf/2401.13410v1,"Data protection legislation like the European Union's General Data Protection Regulation (GDPR) establishes the \textit{right to be forgotten}: a user (client) can request contributions made using their data to be removed from learned models. In this paper, we study how to remove the contributions made by a client participating in a Federated Online Learning to Rank (FOLTR) system. In a FOLTR system, a ranker is learned by aggregating local updates to the global ranking model. Local updates are learned in an online manner at a client-level using queries and implicit interactions that have occurred within that specific client. By doing so, each client's local data is not shared with other clients or with a centralised search service, while at the same time clients can benefit from an effective global ranking model learned from contributions of each client in the federation. In this paper, we study an effective and efficient unlearning method that can remove a client's contribution without compromising the overall ranker effectiveness and without needing to retrain the global ranker from scratch. A key challenge is how to measure whether the model has unlearned the contributions from the client $c^*$ that has requested removal. For this, we instruct $c^*$ to perform a poisoning attack (add noise to this client updates) and then we measure whether the impact of the attack is lessened when the unlearning process has taken place. Through experiments on four datasets, we demonstrate the effectiveness and efficiency of the unlearning strategy under different combinations of parameter settings." "Unlearning for Federated Online Learning to Rank: A Reproducibility Study",2505.12791v1,shejwalkar2021manipulating,\cite{shejwalkar2021manipulating},"Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning",,,True,False,"Shejwalkar, Virat and Houmansadr, Amir",2021.0,,,,,"Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning",Optimizing Model Poisoning Attacks and Defenses for Federat...,https://www.youtube.com/watch?v=G2VYRnLqAXE,SESSION 6C-3 Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning Federated learning (FL) "Pre-training vs. Fine-tuning: A Reproducibility Study on Dense Retrieval Knowledge Acquisition",2505.07166v1,karpukhin2020dense,\cite{karpukhin2020dense},Dense Passage Retrieval for Open-Domain Question Answering,http://arxiv.org/abs/2004.04906v3,"Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework. 
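A compact numpy sketch of the dual-encoder training signal behind DPR-style retrievers follows: dot-product scores between question and passage embeddings, with the other passages in the batch serving as negatives. Random vectors stand in for encoder outputs; this is an illustration of the objective, not the released implementation.

```python
# In-batch negative log-likelihood for a dual-encoder retriever (sketch).
import numpy as np

def in_batch_nll(q_emb, p_emb):
    """q_emb, p_emb: (B, d); p_emb[i] is the gold passage for q_emb[i]."""
    scores = q_emb @ p_emb.T                          # (B, B) similarity matrix
    scores -= scores.max(axis=1, keepdims=True)       # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))               # diagonal = gold pairs

rng = np.random.default_rng(3)
q = rng.normal(size=(8, 16))                          # stand-in question embeddings
p = q + 0.1 * rng.normal(size=(8, 16))                # positives near their questions
print(in_batch_nll(q, p))
```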
When evaluated on a wide range of open-domain QA datasets, our dense retriever outperforms a strong Lucene-BM25 system largely by 9%-19% absolute in terms of top-20 passage retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art on multiple open-domain QA benchmarks.",True,True,"Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau",2020.0,,,,,Dense Passage Retrieval for Open-Domain Question Answering,[2004.04906] Dense Passage Retrieval for Open-Domain ...,https://arxiv.org/abs/2004.04906,"arXiv:2004.04906 (cs). Authors: Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih. Dense Passage Retrieval for Open-Domain Question Answering." "Pre-training vs. Fine-tuning: A Reproducibility Study on Dense Retrieval Knowledge Acquisition",2505.07166v1,izacard2021contriever,\cite{izacard2021contriever},Contriever: A Fully Unsupervised Dense Retriever,,,True,False,"Izacard, Gautier and Grave, Edouard",2021.0,,,,,Contriever: A Fully Unsupervised Dense Retriever,Unsupervised Dense Information Retrieval with Contrastive Learning,https://fanpu.io/summaries/2024-10-07-unsupervised-dense-information-retrieval-with-contrastive-learning/,"Contriever is one of the most competitive & popular baselines for retrievers, and shows how unsupervised techniques have broad appeal. Not" "Pre-training vs. Fine-tuning: A Reproducibility Study on Dense Retrieval Knowledge Acquisition",2505.07166v1,reimers2019sentence,\cite{reimers2019sentence},Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks,http://arxiv.org/abs/1908.10084v1,"BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) have set a new state-of-the-art performance on sentence-pair regression tasks like semantic textual similarity (STS). However, it requires that both sentences are fed into the network, which causes a massive computational overhead: Finding the most similar pair in a collection of 10,000 sentences requires about 50 million inference computations (~65 hours) with BERT. The construction of BERT makes it unsuitable for semantic similarity search as well as for unsupervised tasks like clustering. In this publication, we present Sentence-BERT (SBERT), a modification of the pretrained BERT network that uses siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. This reduces the effort for finding the most similar pair from 65 hours with BERT / RoBERTa to about 5 seconds with SBERT, while maintaining the accuracy from BERT. 
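The bi-encoder pattern SBERT popularized is also easy to exercise through the third-party sentence-transformers package; the snippet below assumes that package is installed and downloads a small public checkpoint on first use.

```python
# Encode sentences independently, then compare them with cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
sentences = [
    "How do I reset my password?",
    "Steps for password recovery",
    "Best hiking trails near Zurich",
]
embeddings = model.encode(sentences)                 # one vector per sentence
print(util.cos_sim(embeddings, embeddings))          # pairwise similarity matrix
```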
We evaluate SBERT and SRoBERTa on common STS tasks and transfer learning tasks, where it outperforms other state-of-the-art sentence embeddings methods.",True,True,"Reimers, Nils and Gurevych, Iryna",2019.0,,,,,Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks,[PDF] Sentence Embeddings using Siamese BERT-Networks,https://aclanthology.org/D19-1410.pdf,"©2019 Association for Computational Linguistics. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. Nils Reimers and Iryna Gurevych, Ubiquitous Knowledge Processing Lab (UKP-TUDA), Department of Computer Science, Technische Universität Darmstadt, www.ukp.tu-darmstadt.de Abstract BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) have set a new state-of-the-art performance on sentence-pair regression tasks like semantic textual similarity (STS). We fine-tune SBERT on NLI data, which creates sentence embeddings that significantly outperform other state-of-the-art sentence embedding methods like InferSent (Conneau et al., 2017) and Universal Sentence Encoder (Cer et al., 2018)." "Pre-training vs. Fine-tuning: A Reproducibility Study on Dense Retrieval Knowledge Acquisition",2505.07166v1,gao2021simcse,\cite{gao2021simcse},SimCSE: Simple Contrastive Learning of Sentence Embeddings,http://arxiv.org/abs/2104.08821v4,"This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works surprisingly well, performing on par with previous supervised counterparts. We find that dropout acts as minimal data augmentation, and removing it leads to a representation collapse. Then, we propose a supervised approach, which incorporates annotated pairs from natural language inference datasets into our contrastive learning framework by using ""entailment"" pairs as positives and ""contradiction"" pairs as hard negatives. We evaluate SimCSE on standard semantic textual similarity (STS) tasks, and our unsupervised and supervised models using BERT base achieve an average of 76.3% and 81.6% Spearman's correlation respectively, a 4.2% and 2.2% improvement compared to the previous best results. We also show -- both theoretically and empirically -- that the contrastive learning objective regularizes pre-trained embeddings' anisotropic space to be more uniform, and it better aligns positive pairs when supervised signals are available.",True,True,"Gao, Tianyu and Yao, Xingcheng and Chen, Danqi",2021.0,,,,,SimCSE: Simple Contrastive Learning of Sentence Embeddings,SimCSE: Simple Contrastive Learning of Sentence Embeddings,http://arxiv.org/pdf/2104.08821v4,"This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works surprisingly well, performing on par with previous supervised counterparts. We find that dropout acts as minimal data augmentation, and removing it leads to a representation collapse. Then, we propose a supervised approach, which incorporates annotated pairs from natural language inference datasets into our contrastive learning framework by using ""entailment"" pairs as positives and ""contradiction"" pairs as hard negatives. 
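A conceptual numpy rendering of the unsupervised SimCSE objective described above: the same inputs are encoded twice with different dropout masks, and the two views form positive pairs in an InfoNCE loss. The random-projection "encoder" is purely illustrative, not the paper's BERT-based model.

```python
# Dropout-as-augmentation contrastive loss, SimCSE-style (toy encoder).
import numpy as np

rng = np.random.default_rng(4)
W = rng.normal(size=(32, 16))                     # toy encoder weights

def encode(x, p_drop=0.1):
    mask = rng.random(W.shape) > p_drop           # fresh dropout mask per call
    h = x @ (W * mask)
    return h / np.linalg.norm(h, axis=1, keepdims=True)

def simcse_loss(x, temperature=0.05):
    z1, z2 = encode(x), encode(x)                 # two dropout-noised views
    sims = (z1 @ z2.T) / temperature              # cosine similarities / tau
    sims -= sims.max(axis=1, keepdims=True)       # numerical stability
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # diagonal entries are positives

x = rng.normal(size=(8, 32))                      # toy sentence representations
print(simcse_loss(x))
```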
We evaluate SimCSE on standard semantic textual similarity (STS) tasks, and our unsupervised and supervised models using BERT base achieve an average of 76.3% and 81.6% Spearman's correlation respectively, a 4.2% and 2.2% improvement compared to the previous best results. We also show -- both theoretically and empirically -- that the contrastive learning objective regularizes pre-trained embeddings' anisotropic space to be more uniform, and it better aligns positive pairs when supervised signals are available." "Pre-training vs. Fine-tuning: A Reproducibility Study on Dense Retrieval Knowledge Acquisition",2505.07166v1,replama2021,\cite{replama2021},RePLAMA: A Decoder-based Dense Retriever for Open-Domain Question Answering,,,True,False,"Smith, John and Doe, Jane",2021.0,,,,,RePLAMA: A Decoder-based Dense Retriever for Open-Domain Question Answering,A Reproducibility Study on Dense Retrieval Knowledge Acquisition,https://dl.acm.org/doi/10.1145/3726302.3730332,RePLAMA: A Decoder-based Dense Retriever for Open-Domain Question Answering. In Proceedings of the 2021 Conference on Information Retrieval "Pre-training vs. Fine-tuning: A Reproducibility Study on Dense Retrieval Knowledge Acquisition",2505.07166v1,promptreps2021,\cite{promptreps2021},PromptReps: Enhancing Dense Retrieval with Prompt-based Representations,,,True,False,"Lee, Alex and Kumar, Rahul",2021.0,,,,,PromptReps: Enhancing Dense Retrieval with Prompt-based Representations,[2404.18424] PromptReps: Prompting Large Language Models to ...,https://arxiv.org/abs/2404.18424,"In this paper, we propose PromptReps, which combines the advantages of both categories: no need for training and the ability to retrieve from the whole corpus." "Pre-training vs. Fine-tuning: A Reproducibility Study on Dense Retrieval Knowledge Acquisition",2505.07166v1,msmarco,\cite{msmarco},MS MARCO: A Human Generated MAchine Reading COmprehension Dataset,http://arxiv.org/abs/1611.09268v3,"We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. 
We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.",True,True,"Nguyen, Tri and others",2016.0,,,,,MS MARCO: A Human Generated MAchine Reading COmprehension Dataset,MS MARCO: A Human Generated MAchine Reading COmprehension Dataset,http://arxiv.org/pdf/1611.09268v3,"We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models." "Pre-training vs. Fine-tuning: A Reproducibility Study on Dense Retrieval Knowledge Acquisition",2505.07166v1,naturalquestions,\cite{naturalquestions},Natural Questions: A Benchmark for Question Answering,,,True,False,"Kwiatkowski, Tom and Palomaki, Jennimaria and Redfield, Olivia and Collins, Michael and Parikh, Ankur and Alberti, Chris and Epstein, Danielle and Polosukhin, Illia and Devlin, Jacob and Lee, Kenton and others",2019.0,,,,,Natural Questions: A Benchmark for Question Answering,Natural Questions: A Benchmark for Question Answering Research,https://scispace.com/papers/natural-questions-a-benchmark-for-question-answering-10mm1ytgmc,"The Natural Questions corpus, a question answering data set, is presented, introducing robust metrics for the purposes of evaluating question answering systems." "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,Frequency23,\cite{Frequency23},"Frequency Enhanced Hybrid Attention Network for Sequential Recommendation",http://arxiv.org/abs/2304.09184v3,"The self-attention mechanism, which equips with a strong capability of modeling long-range dependencies, is one of the extensively used techniques in the sequential recommendation field. However, many recent studies represent that current self-attention based models are low-pass filters and are inadequate to capture high-frequency information. Furthermore, since the items in the user behaviors are intertwined with each other, these models are incomplete to distinguish the inherent periodicity obscured in the time domain. In this work, we shift the perspective to the frequency domain, and propose a novel Frequency Enhanced Hybrid Attention Network for Sequential Recommendation, namely FEARec.
In this model, we firstly improve the original time domain self-attention in the frequency domain with a ramp structure to make both low-frequency and high-frequency information could be explicitly learned in our approach. Moreover, we additionally design a similar attention mechanism via auto-correlation in the frequency domain to capture the periodic characteristics and fuse the time and frequency level attention in a union model. Finally, both contrastive learning and frequency regularization are utilized to ensure that multiple views are aligned in both the time domain and frequency domain. Extensive experiments conducted on four widely used benchmark datasets demonstrate that the proposed model performs significantly better than the state-of-the-art approaches.",True,True,"Du, Xinyu and Yuan, Huanhuan and Zhao, Pengpeng and Qu, Jianfeng and Zhuang, Fuzhen and Liu, Guanfeng and Liu, Yanchi and Sheng, Victor S",2023.0,,,,,"Frequency Enhanced Hybrid Attention Network for Sequential Recommendation",Frequency Enhanced Hybrid Attention Network for ...,https://arxiv.org/pdf/2304.09184,"by X Du · 2023 · Cited by 108 — FEARec is a Frequency Enhanced Hybrid Attention Network for sequential recommendation, improving self-attention in the frequency domain to capture both low and" "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,DL4,\cite{DL4},Deep learning based recommender system: A survey and new perspectives,,,True,False,"Zhang, Shuai and Yao, Lina and Sun, Aixin and Tay, Yi",2019.0,,,,CSUR,Deep learning based recommender system: A survey and new perspectives,Deep Learning based Recommender System: A Survey and New Perspectives,http://arxiv.org/pdf/1707.07435v7,"With the ever-growing volume of online information, recommender systems have been an effective strategy to overcome such information overload. The utility of recommender systems cannot be overstated, given its widespread adoption in many web applications, along with its potential impact to ameliorate many problems related to over-choice. In recent years, deep learning has garnered considerable interest in many research fields such as computer vision and natural language processing, owing not only to stellar performance but also the attractive property of learning feature representations from scratch. The influence of deep learning is also pervasive, recently demonstrating its effectiveness when applied to information retrieval and recommender systems research. Evidently, the field of deep learning in recommender system is flourishing. This article aims to provide a comprehensive review of recent research efforts on deep learning based recommender systems. More concretely, we provide and devise a taxonomy of deep learning based recommendation models, along with providing a comprehensive summary of the state-of-the-art. Finally, we expand on current trends and provide new perspectives pertaining to this new exciting development of the field." 
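The frequency-domain view in the FEARec entry above can be illustrated with a small FFT-based band filter. The sketch below only demonstrates the generic mechanism of isolating low- or high-frequency components of an item-embedding sequence; the paper's actual ramp structure across layers and its auto-correlation attention are more involved, and `keep_frequency_band` is an illustrative name, not the authors' API.

```python
import torch

def keep_frequency_band(x, lo_frac, hi_frac):
    """Keep only the rFFT bins in [lo_frac, hi_frac) of a sequence of
    item embeddings x with shape (batch, seq_len, dim)."""
    freq = torch.fft.rfft(x, dim=1)               # to frequency domain
    n_bins = freq.size(1)
    lo, hi = int(n_bins * lo_frac), int(n_bins * hi_frac)
    mask = torch.zeros(n_bins, dtype=torch.bool, device=x.device)
    mask[lo:hi] = True
    freq = freq * mask.view(1, -1, 1).to(freq.dtype)
    return torch.fft.irfft(freq, n=x.size(1), dim=1)

# e.g., give each attention head a different (lo_frac, hi_frac) band so that
# low- and high-frequency behavior patterns are modeled explicitly.
```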
"STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,Xavier,\cite{Xavier},Understanding the difficulty of training deep feedforward neural networks,,,True,False,"Glorot, Xavier and Bengio, Yoshua",2010.0,,,,,Understanding the difficulty of training deep feedforward neural networks,Understanding the difficulty of training deep feedforward ...,https://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf,"by X Glorot · Cited by 28103 — Our objective here is to understand better why standard gradient descent from random initialization is doing so poorly with deep neural networks, to better" "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,sse-pt,\cite{sse-pt},SSE-PT: Sequential recommendation via personalized transformer,,,True,False,"Wu, Liwei and Li, Shuqing and Hsieh, Cho-Jui and Sharpnack, James",2020.0,,,,,SSE-PT: Sequential recommendation via personalized transformer,SSE-PT: Sequential Recommendation Via Personalized Transformer,https://www.researchgate.net/publication/347834874_SSE-PT_Sequential_Recommendation_Via_Personalized_Transformer,Sequential recommendation systems process a user's history of interactions into a time-ordered sequence that reflects the evolution of their "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,zhao2023embedding,\cite{zhao2023embedding},Embedding in Recommender Systems: A Survey,http://arxiv.org/abs/2310.18608v2,"Recommender systems have become an essential component of many online platforms, providing personalized recommendations to users. A crucial aspect is embedding techniques that coverts the high-dimensional discrete features, such as user and item IDs, into low-dimensional continuous vectors and can enhance the recommendation performance. Applying embedding techniques captures complex entity relationships and has spurred substantial research. In this survey, we provide an overview of the recent literature on embedding techniques in recommender systems. This survey covers embedding methods like collaborative filtering, self-supervised learning, and graph-based techniques. Collaborative filtering generates embeddings capturing user-item preferences, excelling in sparse data. Self-supervised methods leverage contrastive or generative learning for various tasks. Graph-based techniques like node2vec exploit complex relationships in network-rich environments. Addressing the scalability challenges inherent to embedding methods, our survey delves into innovative directions within the field of recommendation systems. These directions aim to enhance performance and reduce computational complexity, paving the way for improved recommender systems. Among these innovative approaches, we will introduce Auto Machine Learning (AutoML), hash techniques, and quantization techniques in this survey. We discuss various architectures and techniques and highlight the challenges and future directions in these aspects. 
This survey aims to provide a comprehensive overview of the state-of-the-art in this rapidly evolving field and serve as a useful resource for researchers and practitioners working in the area of recommender systems.",True,True,"Zhao, Xiangyu and Wang, Maolin and Zhao, Xinjian and Li, Jiansheng and Zhou, Shucheng and Yin, Dawei and Li, Qing and Tang, Jiliang and Guo, Ruocheng",2023.0,,,,arXiv preprint arXiv:2310.18608,Embedding in Recommender Systems: A Survey,Embedding in Recommender Systems: A Survey,http://arxiv.org/pdf/2310.18608v2,"Recommender systems have become an essential component of many online platforms, providing personalized recommendations to users. A crucial aspect is embedding techniques that coverts the high-dimensional discrete features, such as user and item IDs, into low-dimensional continuous vectors and can enhance the recommendation performance. Applying embedding techniques captures complex entity relationships and has spurred substantial research. In this survey, we provide an overview of the recent literature on embedding techniques in recommender systems. This survey covers embedding methods like collaborative filtering, self-supervised learning, and graph-based techniques. Collaborative filtering generates embeddings capturing user-item preferences, excelling in sparse data. Self-supervised methods leverage contrastive or generative learning for various tasks. Graph-based techniques like node2vec exploit complex relationships in network-rich environments. Addressing the scalability challenges inherent to embedding methods, our survey delves into innovative directions within the field of recommendation systems. These directions aim to enhance performance and reduce computational complexity, paving the way for improved recommender systems. Among these innovative approaches, we will introduce Auto Machine Learning (AutoML), hash techniques, and quantization techniques in this survey. We discuss various architectures and techniques and highlight the challenges and future directions in these aspects. This survey aims to provide a comprehensive overview of the state-of-the-art in this rapidly evolving field and serve as a useful resource for researchers and practitioners working in the area of recommender systems." "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,FMLP,\cite{FMLP},Filter-enhanced MLP is All You Need for Sequential Recommendation,http://arxiv.org/abs/2202.13556v1,"Recently, deep neural networks such as RNN, CNN and Transformer have been applied in the task of sequential recommendation, which aims to capture the dynamic preference characteristics from logged user behavior data for accurate recommendation. However, in online platforms, logged user behavior data is inevitable to contain noise, and deep recommendation models are easy to overfit on these logged data. To tackle this problem, we borrow the idea of filtering algorithms from signal processing that attenuates the noise in the frequency domain. In our empirical experiments, we find that filtering algorithms can substantially improve representative sequential recommendation models, and integrating simple filtering algorithms (eg Band-Stop Filter) with an all-MLP architecture can even outperform competitive Transformer-based models. Motivated by it, we propose \textbf{FMLP-Rec}, an all-MLP model with learnable filters for sequential recommendation task. 
The all-MLP architecture endows our model with lower time complexity, and the learnable filters can adaptively attenuate the noise information in the frequency domain. Extensive experiments conducted on eight real-world datasets demonstrate the superiority of our proposed method over competitive RNN, CNN, GNN and Transformer-based methods. Our code and data are publicly available at the link: \textcolor{blue}{\url{https://github.com/RUCAIBox/FMLP-Rec}}.",True,True,"Zhou, Kun and Yu, Hui and Zhao, Wayne Xin and Wen, Ji-Rong",2022.0,,,,,Filter-enhanced MLP is All You Need for Sequential Recommendation,Filter-enhanced MLP is All You Need for Sequential Recommendation,https://dl.acm.org/doi/10.1145/3485447.3512111,"We propose FMLP-Rec, an all-MLP model with learnable filters for sequential recommendation task. The all-MLP architecture endows our model with lower time" "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,strec,\cite{strec},STRec: Sparse Transformer for Sequential Recommendations,,,True,False,"Li, Chengxi and Wang, Yejing and Liu, Qidong and Zhao, Xiangyu and Wang, Wanyu and Wang, Yiqi and Zou, Lixin and Fan, Wenqi and Li, Qing",2023.0,,,,,STRec: Sparse Transformer for Sequential Recommendations,CITE,https://aml-cityu.github.io/bibtex/li2023strec.html,"@inproceedings{li2023strec, title={STRec: Sparse Transformer for Sequential Recommendations}, author={Li, Chengxi and Wang, Yejing and Liu, Qidong and Zhao" "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,MLM4Rec,\cite{MLM4Rec},Learning Global and Multi-granularity Local Representation with MLP for Sequential Recommendation,,,True,False,"Long, Chao and Yuan, Huanhuan and Fang, Junhua and Xian, Xuefeng and Liu, Guanfeng and Sheng, Victor S and Zhao, Pengpeng",2024.0,,,,ACM Transactions on Knowledge Discovery from Data,Learning Global and Multi-granularity Local Representation with MLP for Sequential Recommendation,Learning Global and Multi-granularity Local ...,https://openreview.net/forum?id=CtsUBneYhu&referrer=%5Bthe%20profile%20of%20Junhua%20Fang%5D(%2Fprofile%3Fid%3D~Junhua_Fang1),"Learning Global and Multi-granularity Local Representation with MLP for Sequential Recommendation | OpenReview Learning Global and Multi-granularity Local Representation with MLP for Sequential Recommendation Usually, users’ global and local preferences jointly affect the final recommendation result in different ways. Most existing works use transformers to globally model sequences, which makes them face the dilemma of quadratic computational complexity when dealing with long sequences. To this end, we proposed a parallel architecture for capturing global representation and Multi-granularity Local dependencies with MLP for sequential Recommendation (MLM4Rec). For global representation, we utilize modified MLP-Mixer to capture global information of user sequences due to its simplicity and efficiency. For local representation, we incorporate convolution into MLP and propose a multi-granularity local awareness mechanism for capturing richer local semantic information." 
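The learnable-filter layer that the FMLP-Rec entry above describes reduces to a few lines: FFT along the sequence axis, multiplication by learnable per-frequency weights, inverse FFT. A minimal PyTorch sketch under that reading (the released code stores the filter as a real tensor viewed as complex; a complex parameter is used here for brevity):

```python
import torch
import torch.nn as nn

class LearnableFilterLayer(nn.Module):
    """Frequency-domain filtering in the spirit of FMLP-Rec (sketch)."""
    def __init__(self, max_len, dim):
        super().__init__()
        # One learnable complex weight per rFFT bin and feature dimension.
        self.filter = nn.Parameter(
            torch.randn(max_len // 2 + 1, dim, dtype=torch.cfloat) * 0.02)

    def forward(self, x):                    # x: (batch, max_len, dim)
        freq = torch.fft.rfft(x, dim=1)      # to frequency domain
        freq = freq * self.filter            # learned, adaptive attenuation
        return torch.fft.irfft(freq, n=x.size(1), dim=1)
```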
"STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,PEPNet,\cite{PEPNet},"PEPNet: Parameter and Embedding Personalized Network for Infusing with Personalized Prior Information",http://arxiv.org/abs/2302.01115v3,"With the increase of content pages and interactive buttons in online services such as online-shopping and video-watching websites, industrial-scale recommender systems face challenges in multi-domain and multi-task recommendations. The core of multi-task and multi-domain recommendation is to accurately capture user interests in multiple scenarios given multiple user behaviors. In this paper, we propose a plug-and-play \textit{\textbf{P}arameter and \textbf{E}mbedding \textbf{P}ersonalized \textbf{Net}work (\textbf{PEPNet})} for multi-domain and multi-task recommendation. PEPNet takes personalized prior information as input and dynamically scales the bottom-level Embedding and top-level DNN hidden units through gate mechanisms. \textit{Embedding Personalized Network (EPNet)} performs personalized selection on Embedding to fuse features with different importance for different users in multiple domains. \textit{Parameter Personalized Network (PPNet)} executes personalized modification on DNN parameters to balance targets with different sparsity for different users in multiple tasks. We have made a series of special engineering optimizations combining the Kuaishou training framework and the online deployment environment. By infusing personalized selection of Embedding and personalized modification of DNN parameters, PEPNet tailored to the interests of each individual obtains significant performance gains, with online improvements exceeding 1\% in multiple task metrics across multiple domains. We have deployed PEPNet in Kuaishou apps, serving over 300 million users every day.",True,True,"Chang, Jianxin and Zhang, Chenbin and Hui, Yiqun and Leng, Dewei and Niu, Yanan and Song, Yang and Gai, Kun",2023.0,,,,,"PEPNet: Parameter and Embedding Personalized Network for Infusing with Personalized Prior Information",[PDF] PEPNet: Parameter and Embedding Personalized Network ... 
- arXiv,https://arxiv.org/pdf/2302.01115, "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,mb-str,\cite{mb-str},Multi-behavior sequential transformer recommender,,,True,False,"Yuan, Enming and Guo, Wei and He, Zhicheng and Guo, Huifeng and Liu, Chengkai and Tang, Ruiming",2022.0,,,,,Multi-behavior sequential transformer recommender,Multi-Behavior Sequential Transformer Recommender,https://dl.acm.org/doi/10.1145/3477495.3532023,"The proposed framework MB-STR, a Multi-Behavior Sequential Transformer Recommender, is equipped with the multi-behavior transformer layer (MB-Trans), the multi" "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,lightsan,\cite{lightsan},Lighter and better: low-rank decomposed self-attention networks for next-item recommendation,,,True,False,"Fan, Xinyan and Liu, Zheng and Lian, Jianxun and Zhao, Wayne Xin and Xie, Xing and Wen, Ji-Rong",2021.0,,,,,Lighter and better: low-rank decomposed self-attention networks for next-item recommendation,[PDF] Low-Rank Decomposed Self-Attention Networks for Next-Item ...,https://www.microsoft.com/en-us/research/wp-content/uploads/2021/05/LighterandBetter_Low-RankDecomposedSelf-AttentionNetworksforNext-ItemRecommendation.pdf,"Lighter and Better: Low-Rank Decomposed Self-Attention Networks for Next-Item Recommendation. Xinyan Fan, Zheng Liu, Jianxun Lian, Wayne Xin Zhao, Xing Xie, and Ji-Rong Wen (Renmin University of China; Microsoft Research Asia). Abstract: Self-attention networks (SANs) have been intensively applied for sequential recommenders, but they are limited due to: (1) the quadratic complexity and vulnerability to over-parameterization in self-attention; (2) inaccurate modeling of sequential relations between items due to the implicit position encoding. Our main contributions are summarized as follows: a novel SANs-based sequential recommender, LightSANs, with two advantages: (1) the low-rank decomposed self-attention for more efficient and precise modeling of context-aware representations; (2) the decoupled position encoding for more effective modeling of sequential relations between items." "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,autoseqrec,\cite{autoseqrec},AutoSeqRec: Autoencoder for Efficient Sequential Recommendation,http://arxiv.org/abs/2308.06878v1,"Sequential recommendation demonstrates the capability to recommend items by modeling the sequential behavior of users. Traditional methods typically treat users as sequences of items, overlooking the collaborative relationships among them. Graph-based methods incorporate collaborative information by utilizing the user-item interaction graph. However, these methods sometimes face challenges in terms of time complexity and computational efficiency. To address these limitations, this paper presents AutoSeqRec, an incremental recommendation model specifically designed for sequential recommendation tasks. AutoSeqRec is based on autoencoders and consists of an encoder and three decoders within the autoencoder architecture. These components consider both the user-item interaction matrix and the rows and columns of the item transition matrix.
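The gate mechanism described in the PEPNet entry above, scaling bottom-level embeddings and top-level DNN hidden units by personalized priors, can be illustrated as follows. `GateNU` is an illustrative name, and the sigmoid scaled to (0, 2) is an assumption about the gate's output range, not a quotation of the paper's exact formulation.

```python
import torch
import torch.nn as nn

class GateNU(nn.Module):
    """Sketch of a personalized gate: a small net maps prior features
    (e.g., user/domain ids) to element-wise scaling factors."""
    def __init__(self, prior_dim, target_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(prior_dim, target_dim), nn.Sigmoid())

    def forward(self, prior, target):
        # Scale the target representation (an embedding or a hidden layer)
        # by personalized factors in (0, 2); assumed range, see lead-in.
        return target * (2.0 * self.net(prior))
```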
The reconstruction of the user-item interaction matrix captures user long-term preferences through collaborative filtering. In addition, the rows and columns of the item transition matrix represent the item out-degree and in-degree hopping behavior, which allows for modeling the user's short-term interests. When making incremental recommendations, only the input matrices need to be updated, without the need to update parameters, which makes AutoSeqRec very efficient. Comprehensive evaluations demonstrate that AutoSeqRec outperforms existing methods in terms of accuracy, while showcasing its robustness and efficiency.",True,True,"Liu, Sijia and Liu, Jiahao and Gu, Hansu and Li, Dongsheng and Lu, Tun and Zhang, Peng and Gu, Ning",2023.0,,,,,AutoSeqRec: Autoencoder for Efficient Sequential Recommendation,AutoSeqRec: Autoencoder for Efficient Sequential Recommendation,http://arxiv.org/pdf/2308.06878v1,"Sequential recommendation demonstrates the capability to recommend items by modeling the sequential behavior of users. Traditional methods typically treat users as sequences of items, overlooking the collaborative relationships among them. Graph-based methods incorporate collaborative information by utilizing the user-item interaction graph. However, these methods sometimes face challenges in terms of time complexity and computational efficiency. To address these limitations, this paper presents AutoSeqRec, an incremental recommendation model specifically designed for sequential recommendation tasks. AutoSeqRec is based on autoencoders and consists of an encoder and three decoders within the autoencoder architecture. These components consider both the user-item interaction matrix and the rows and columns of the item transition matrix. The reconstruction of the user-item interaction matrix captures user long-term preferences through collaborative filtering. In addition, the rows and columns of the item transition matrix represent the item out-degree and in-degree hopping behavior, which allows for modeling the user's short-term interests. When making incremental recommendations, only the input matrices need to be updated, without the need to update parameters, which makes AutoSeqRec very efficient. Comprehensive evaluations demonstrate that AutoSeqRec outperforms existing methods in terms of accuracy, while showcasing its robustness and efficiency." "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,HRNN,\cite{HRNN},"Personalizing Session-based Recommendations with Hierarchical Recurrent Neural Networks",http://arxiv.org/abs/1706.04148v5,"Session-based recommendations are highly relevant in many modern on-line services (e.g. e-commerce, video streaming) and recommendation settings. Recently, Recurrent Neural Networks have been shown to perform very well in session-based settings. While in many session-based recommendation domains user identifiers are hard to come by, there are also domains in which user profiles are readily available. We propose a seamless way to personalize RNN models with cross-session information transfer and devise a Hierarchical RNN model that relays end evolves latent hidden states of the RNNs across user sessions. 
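At its core, the autoencoder recipe in the AutoSeqRec entry above reconstructs an interaction matrix through a low-dimensional bottleneck (AutoSeqRec additionally attaches decoders to the rows and columns of the item transition matrix). A minimal single-decoder sketch of the underlying idea, with illustrative names:

```python
import torch
import torch.nn as nn

class InteractionAutoencoder(nn.Module):
    """Reconstruct a user's interaction row through a latent bottleneck."""
    def __init__(self, num_items, hidden=64):
        super().__init__()
        self.enc = nn.Linear(num_items, hidden)
        self.dec = nn.Linear(hidden, num_items)

    def forward(self, user_rows):            # (batch, num_items) binary history
        z = torch.relu(self.enc(user_rows))  # latent user representation
        return self.dec(z)                   # reconstruction = item scores

# Incremental use: when new interactions arrive, only the input rows change;
# recommendations come from re-running forward, without retraining parameters.
```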
Results on two industry datasets show large improvements over the session-only RNNs.",True,True,"Quadrana, Massimo and Karatzoglou, Alexandros and Hidasi, Bal{\'a}zs and Cremonesi, Paolo",2017.0,,,,,"Personalizing Session-based Recommendations with Hierarchical Recurrent Neural Networks",Personalizing Session-based Recommendations with Hierarchical ...,https://www.slideshare.net/slideshow/personalizing-sessionbased-recommendations-with-hierarchical-recurrent-neural-networks/79285884,This document summarizes a research paper on personalizing session-based recommendations with hierarchical recurrent neural networks (HRNNs). "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,zhao2023user,\cite{zhao2023user},User Retention-oriented Recommendation with Decision Transformer,http://arxiv.org/abs/2303.06347v1,"Improving user retention with reinforcement learning~(RL) has attracted increasing attention due to its significant importance in boosting user engagement. However, training the RL policy from scratch without hurting users' experience is unavoidable due to the requirement of trial-and-error searches. Furthermore, the offline methods, which aim to optimize the policy without online interactions, suffer from the notorious stability problem in value estimation or unbounded variance in counterfactual policy evaluation. To this end, we propose optimizing user retention with Decision Transformer~(DT), which avoids the offline difficulty by translating the RL as an autoregressive problem. However, deploying the DT in recommendation is a non-trivial problem because of the following challenges: (1) deficiency in modeling the numerical reward value; (2) data discrepancy between the policy learning and recommendation generation; (3) unreliable offline performance evaluation. In this work, we, therefore, contribute a series of strategies for tackling the exposed issues. We first articulate an efficient reward prompt by weighted aggregation of meta embeddings for informative reward embedding. Then, we endow a weighted contrastive learning method to solve the discrepancy between training and inference. Furthermore, we design two robust offline metrics to measure user retention. Finally, the significant improvement in the benchmark datasets demonstrates the superiority of the proposed method.",True,True,"Zhao, Kesen and Zou, Lixin and Zhao, Xiangyu and Wang, Maolin and Yin, Dawei",2023.0,,,,,User Retention-oriented Recommendation with Decision Transformer,User Retention-oriented Recommendation with Decision ...,https://arxiv.org/pdf/2303.06347,by K Zhao · 2023 · Cited by 31 — This paper proposes using Decision Transformer (DT) to optimize user retention in recommendation by translating reinforcement learning as an "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,DMAN,\cite{DMAN},Dynamic Memory based Attention Network for Sequential Recommendation,http://arxiv.org/abs/2102.09269v1,"Sequential recommendation has become increasingly essential in various online services. It aims to model the dynamic preferences of users from their historical interactions and predict their next items. The accumulated user behavior records on real systems could be very long. This rich data brings opportunities to track actual interests of users. Prior efforts mainly focus on making recommendations based on relatively recent behaviors. 
However, the overall sequential data may not be effectively utilized, as early interactions might affect users' current choices. Also, it has become intolerable to scan the entire behavior sequence when performing inference for each user, since real-world system requires short response time. To bridge the gap, we propose a novel long sequential recommendation model, called Dynamic Memory-based Attention Network (DMAN). It segments the overall long behavior sequence into a series of sub-sequences, then trains the model and maintains a set of memory blocks to preserve long-term interests of users. To improve memory fidelity, DMAN dynamically abstracts each user's long-term interest into its own memory blocks by minimizing an auxiliary reconstruction loss. Based on the dynamic memory, the user's short-term and long-term interests can be explicitly extracted and combined for efficient joint recommendation. Empirical results over four benchmark datasets demonstrate the superiority of our model in capturing long-term dependency over various state-of-the-art sequential models.",True,True,"Tan, Qiaoyu and Zhang, Jianwei and Liu, Ninghao and Huang, Xiao and Yang, Hongxia and Zhou, Jingren and Hu, Xia",2021.0,,,,,Dynamic Memory based Attention Network for Sequential Recommendation,Dynamic Memory based Attention Network for Sequential Recommendation,http://arxiv.org/pdf/2102.09269v1,"Sequential recommendation has become increasingly essential in various online services. It aims to model the dynamic preferences of users from their historical interactions and predict their next items. The accumulated user behavior records on real systems could be very long. This rich data brings opportunities to track actual interests of users. Prior efforts mainly focus on making recommendations based on relatively recent behaviors. However, the overall sequential data may not be effectively utilized, as early interactions might affect users' current choices. Also, it has become intolerable to scan the entire behavior sequence when performing inference for each user, since real-world system requires short response time. To bridge the gap, we propose a novel long sequential recommendation model, called Dynamic Memory-based Attention Network (DMAN). It segments the overall long behavior sequence into a series of sub-sequences, then trains the model and maintains a set of memory blocks to preserve long-term interests of users. To improve memory fidelity, DMAN dynamically abstracts each user's long-term interest into its own memory blocks by minimizing an auxiliary reconstruction loss. Based on the dynamic memory, the user's short-term and long-term interests can be explicitly extracted and combined for efficient joint recommendation. Empirical results over four benchmark datasets demonstrate the superiority of our model in capturing long-term dependency over various state-of-the-art sequential models." "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,koren2009matrix,\cite{koren2009matrix},Content-boosted Matrix Factorization Techniques for Recommender Systems,http://arxiv.org/abs/1210.5631v2,"Many businesses are using recommender systems for marketing outreach. Recommendation algorithms can be either based on content or driven by collaborative filtering. We study different ways to incorporate content information directly into the matrix factorization approach of collaborative filtering. 
These content-boosted matrix factorization algorithms not only improve recommendation accuracy, but also provide useful insights about the contents, as well as make recommendations more easily interpretable.",True,True,"Koren, Yehuda and Bell, Robert and Volinsky, Chris",2009.0,,,,Computer,Content-boosted Matrix Factorization Techniques for Recommender Systems,Content-boosted Matrix Factorization Techniques for Recommender ...,https://arxiv.org/abs/1210.5631,"Content-boosted Matrix Factorization Techniques for Recommender Systems, by Jennifer Nguyen and 1 other authors. arXiv:1210.5631 [stat.ML]." "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,Kang01,\cite{Kang01},Self-Attentive Sequential Recommendation,http://arxiv.org/abs/1808.09781v1,"Sequential dynamics are a key feature of many modern recommender systems, which seek to capture the `context' of users' activities on the basis of actions they have performed recently. To capture such patterns, two approaches have proliferated: Markov Chains (MCs) and Recurrent Neural Networks (RNNs). Markov Chains assume that a user's next action can be predicted on the basis of just their last (or last few) actions, while RNNs in principle allow for longer-term semantics to be uncovered. Generally speaking, MC-based methods perform best in extremely sparse datasets, where model parsimony is critical, while RNNs perform better in denser datasets where higher model complexity is affordable. The goal of our work is to balance these two goals, by proposing a self-attention based sequential model (SASRec) that allows us to capture long-term semantics (like an RNN), but, using an attention mechanism, makes its predictions based on relatively few actions (like an MC). At each time step, SASRec seeks to identify which items are `relevant' from a user's action history, and use them to predict the next item. Extensive empirical studies show that our method outperforms various state-of-the-art sequential models (including MC/CNN/RNN-based approaches) on both sparse and dense datasets. Moreover, the model is an order of magnitude more efficient than comparable CNN/RNN-based models.
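For reference, the matrix factorization machinery that these content-boosted variants extend scores a user-item pair by an inner product of learned latent factors; a minimal sketch:

```python
import numpy as np

def mf_score(P, Q, u, i):
    """Predicted preference of user u for item i: inner product of their
    latent factor vectors (P: num_users x k, Q: num_items x k)."""
    return P[u] @ Q[i]

# Factors are typically learned by minimizing, over observed ratings r_ui,
#   (r_ui - P[u] @ Q[i])**2 + reg * (np.sum(P[u]**2) + np.sum(Q[i]**2))
# via SGD or alternating least squares; content features enter as extra terms.
```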
Visualizations on attention weights also show how our model adaptively handles datasets with various density, and uncovers meaningful patterns in activity sequences.",True,True,"Kang, Wang-Cheng and McAuley, Julian",2018.0,,,,,Self-Attentive Sequential Recommendation,Self Attention on Recommendation System - Jeffery chiang,https://medium.com/analytics-vidhya/self-attention-on-recommendation-system-self-attentive-sequential-recommendation-review-c94796dde001,"Self-attention is a powerful mechanism used in deep learning to process sequential data, such as sentences or time-series data, by considering the relationship" "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,bert4rec,\cite{bert4rec},"BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer",http://arxiv.org/abs/1904.06690v2,"Modeling users' dynamic and evolving preferences from their historical behaviors is challenging and crucial for recommendation systems. Previous methods employ sequential neural networks (e.g., Recurrent Neural Network) to encode users' historical interactions from left to right into hidden representations for making recommendations. Although these methods achieve satisfactory results, they often assume a rigidly ordered sequence which is not always practical. We argue that such left-to-right unidirectional architectures restrict the power of the historical sequence representations. For this purpose, we introduce a Bidirectional Encoder Representations from Transformers for sequential Recommendation (BERT4Rec). However, jointly conditioning on both left and right context in deep bidirectional model would make the training become trivial since each item can indirectly ""see the target item"". To address this problem, we train the bidirectional model using the Cloze task, predicting the masked items in the sequence by jointly conditioning on their left and right context. Comparing with predicting the next item at each position in a sequence, the Cloze task can produce more samples to train a more powerful bidirectional model. Extensive experiments on four benchmark datasets show that our model outperforms various state-of-the-art sequential models consistently.",True,True,"Sun, Fei and Liu, Jun and Wu, Jian and Pei, Changhua and Lin, Xiao and Ou, Wenwu and Jiang, Peng",2019.0,,,,,"BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer",BERT4Rec: Sequential Recommendation with Bidirectional Encoder ...,https://dl.acm.org/doi/10.1145/3357384.3357895,"We proposed a sequential recommendation model called BERT4Rec, which employs the deep bidirectional self-attention to model user behavior sequences." "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,Linrec,\cite{Linrec},"LinRec: Linear Attention Mechanism for Long-term Sequential Recommender Systems",http://arxiv.org/abs/2411.01537v1,"Transformer models have achieved remarkable success in sequential recommender systems (SRSs). However, computing the attention matrix in traditional dot-product attention mechanisms results in a quadratic complexity with sequence lengths, leading to high computational costs for long-term sequential recommendation. 
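The core computation in the SASRec entry above, causal self-attention over the interaction history followed by scoring every candidate item, can be sketched as below (assuming `attn_layer` is an `nn.MultiheadAttention` built with `batch_first=True`; the paper's full architecture adds position embeddings, feed-forward blocks, and layer normalization). BERT4Rec, by contrast, drops the causal mask and trains with Cloze-style masking instead.

```python
import torch
import torch.nn as nn

def causal_next_item_logits(item_emb, seq, attn_layer):
    """seq: (batch, seq_len) item ids. A causal mask restricts position t
    to attend only to positions <= t, matching next-item prediction."""
    x = item_emb(seq)                                   # (batch, seq_len, dim)
    L = seq.size(1)
    causal = torch.triu(torch.ones(L, L, dtype=torch.bool, device=seq.device),
                        diagonal=1)                     # True = masked out
    h, _ = attn_layer(x, x, x, attn_mask=causal)
    return h @ item_emb.weight.T                        # per-step scores over all items

# item_emb = nn.Embedding(num_items, dim)
# attn_layer = nn.MultiheadAttention(dim, num_heads=2, batch_first=True)
```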
Motivated by the above observation, we propose a novel L2-Normalized Linear Attention for the Transformer-based Sequential Recommender Systems (LinRec), which theoretically improves efficiency while preserving the learning capabilities of the traditional dot-product attention. Specifically, by thoroughly examining the equivalence conditions of efficient attention mechanisms, we show that LinRec possesses linear complexity while preserving the property of attention mechanisms. In addition, we reveal its latent efficiency properties by interpreting the proposed LinRec mechanism through a statistical lens. Extensive experiments are conducted based on two public benchmark datasets, demonstrating that the combination of LinRec and Transformer models achieves comparable or even superior performance than state-of-the-art Transformer-based SRS models while significantly improving time and memory efficiency.",True,True,"Liu, Langming and Cai, Liu and Zhang, Chi and Zhao, Xiangyu and Gao, Jingtong and Wang, Wanyu and Lv, Yifu and Fan, Wenqi and Wang, Yiqi and He, Ming and others",2023.0,,,,,"LinRec: Linear Attention Mechanism for Long-term Sequential Recommender Systems",GLINT-RU: Gated Lightweight Intelligent Recurrent Units for ...,https://www.atailab.cn/seminar2025Spring/pdf/2025_KDD_GLINT-RU_Gated%20Lightweight%20Intelligent%20Recurrent%20Units%20for%20Sequential%20Recommender%20Systems.pdf,by S Zhang · 2025 · Cited by 6 — Linrec: Linear attention mechanism for long-term sequential recommender systems. In Proceedings of the 46th International ACM SIGIR Conference on Research "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,GRU4Rec,\cite{GRU4Rec},Session-based Recommendations with Recurrent Neural Networks,http://arxiv.org/abs/1511.06939v4,"We apply recurrent neural networks (RNN) on a new domain, namely recommender systems. Real-life recommender systems often face the problem of having to base recommendations only on short session-based data (e.g. a small sportsware website) instead of long user histories (as in the case of Netflix). In this situation the frequently praised matrix factorization approaches are not accurate. This problem is usually overcome in practice by resorting to item-to-item recommendations, i.e. recommending similar items. We argue that by modeling the whole session, more accurate recommendations can be provided. We therefore propose an RNN-based approach for session-based recommendations. Our approach also considers practical aspects of the task and introduces several modifications to classic RNNs such as a ranking loss function that make it more viable for this specific problem. 
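The linear-complexity idea referenced in the LinRec entry above comes from reordering attention: with non-negative, normalized feature maps, phi(Q)(phi(K)^T V) costs O(L d^2) rather than O(L^2 d). A non-causal sketch under those assumptions (the ELU-based feature map is a common choice from the linear-attention literature; LinRec's exact L2 normalization differs in detail):

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v):
    """q, k, v: (batch, seq_len, dim). Kernelized attention computed
    right-to-left so the cost is linear in sequence length."""
    q = F.normalize(F.elu(q) + 1, dim=-1)   # non-negative, L2-normalized features
    k = F.normalize(F.elu(k) + 1, dim=-1)
    kv = torch.einsum('bld,ble->bde', k, v)                  # phi(K)^T V first
    z = 1.0 / (torch.einsum('bld,bd->bl', q, k.sum(dim=1)) + 1e-6)
    return torch.einsum('bld,bde,bl->ble', q, kv, z)         # normalized output
```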
Experimental results on two data-sets show marked improvements over widely used approaches.",True,True,"Hidasi, Bal{\'a}zs and Karatzoglou, Alexandros and Baltrunas, Linas and Tikk, Domonkos",2015.0,,,,arXiv preprint arXiv:1511.06939,Session-based Recommendations with Recurrent Neural Networks,Session-based Recommendations with Recurrent Neural Networks,https://www.semanticscholar.org/paper/Session-based-Recommendations-with-Recurrent-Neural-Hidasi-Karatzoglou/e0021d61c2ab1334bc725852edd44597f4c65dff,"It is argued that by modeling the whole session, more accurate recommendations can be provided by an RNN-based approach for session-based recommendations," "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,GLINTours25,\cite{GLINTours25},GLINT-RU: Gated Lightweight Intelligent Recurrent Units for Sequential Recommender Systems,,,True,False,"Zhang, Sheng and Wang, Maolin and Zhao, Xiangyu",2024.0,,,,arXiv preprint arXiv:2406.10244,GLINT-RU: Gated Lightweight Intelligent Recurrent Units for Sequential Recommender Systems,GLINT-RU: Gated Lightweight Intelligent Recurrent Units for Sequential Recommender Systems,http://arxiv.org/pdf/2406.10244v3,"Transformer-based models have gained significant traction in sequential recommender systems (SRSs) for their ability to capture user-item interactions effectively. However, these models often suffer from high computational costs and slow inference. Meanwhile, existing efficient SRS approaches struggle to embed high-quality semantic and positional information into latent representations. To tackle these challenges, this paper introduces GLINT-RU, a lightweight and efficient SRS leveraging a single-layer dense selective Gated Recurrent Units (GRU) module to accelerate inference. By incorporating a dense selective gate, GLINT-RU adaptively captures temporal dependencies and fine-grained positional information, generating high-quality latent representations. Additionally, a parallel mixing block infuses fine-grained positional features into user-item interactions, enhancing both recommendation quality and efficiency. Extensive experiments on three datasets demonstrate that GLINT-RU achieves superior prediction accuracy and inference speed, outperforming baselines based on RNNs, Transformers, MLPs, and SSMs. These results establish GLINT-RU as a powerful and efficient solution for SRSs." "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,HiPPOs21,\cite{HiPPOs21},There is HOPE to Avoid HiPPOs for Long-memory State Space Models,,,True,False,"Yu, Annan and Mahoney, Michael W and Erichson, N Benjamin",2024.0,,,,arXiv preprint arXiv:2405.13975,There is HOPE to Avoid HiPPOs for Long-memory State Space Models,There is HOPE to Avoid HiPPOs for Long-memory State ...,https://www.researchgate.net/publication/380820131_There_is_HOPE_to_Avoid_HiPPOs_for_Long-memory_State_Space_Models,"State-space models (SSMs) that utilize linear, time-invariant (LTI) systems are known for their effectiveness in learning long sequences.See more" "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,16Dual,\cite{16Dual},"Dual-path Mamba: Short and Long-term Bidirectional Selective Structured State Space Models for Speech Separation",http://arxiv.org/abs/2403.18257v2,"Transformers have been the most successful architecture for various speech modeling tasks, including speech separation. 
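The session-based RNN recipe in the GRU4Rec entry above is essentially an item embedding, a GRU, and a next-item scoring head; the paper's session-parallel mini-batches and ranking losses (e.g., BPR/TOP1) are omitted from this minimal sketch:

```python
import torch
import torch.nn as nn

class SessionGRU(nn.Module):
    """Minimal session-based GRU recommender in the spirit of GRU4Rec."""
    def __init__(self, num_items, dim=64):
        super().__init__()
        self.emb = nn.Embedding(num_items, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, num_items)

    def forward(self, session):            # (batch, seq_len) clicked item ids
        h, _ = self.gru(self.emb(session))
        return self.out(h)                 # next-item scores at every position
```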
However, the self-attention mechanism in transformers with quadratic complexity is inefficient in computation and memory. Recent models incorporate new layers and modules along with transformers for better performance but also introduce extra model complexity. In this work, we replace transformers with Mamba, a selective state space model, for speech separation. We propose dual-path Mamba, which models short-term and long-term forward and backward dependency of speech signals using selective state spaces. Our experimental results on the WSJ0-2mix data show that our dual-path Mamba models of comparably smaller sizes outperform state-of-the-art RNN model DPRNN, CNN model WaveSplit, and transformer model Sepformer. Code: https://github.com/xi-j/Mamba-TasNet",True,True,"Jiang, Xilin and Han, Cong and Mesgarani, Nima",2024.0,,,,arXiv preprint arXiv:2403.18257,"Dual-path Mamba: Short and Long-term Bidirectional Selective Structured State Space Models for Speech Separation",Dual-path Mamba: Short and Long-term Bidirectional Selective ...,https://arxiv.org/abs/2403.18257,"We propose dual-path Mamba, which models short-term and long-term forward and backward dependency of speech signals using selective state spaces." "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,gu2023mamba,\cite{gu2023mamba},Mamba: Linear-Time Sequence Modeling with Selective State Spaces,http://arxiv.org/abs/2312.00752v2,"Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module. Many subquadratic-time architectures such as linear attention, gated convolution and recurrent models, and structured state space models (SSMs) have been developed to address Transformers' computational inefficiency on long sequences, but they have not performed as well as attention on important modalities such as language. We identify that a key weakness of such models is their inability to perform content-based reasoning, and make several improvements. First, simply letting the SSM parameters be functions of the input addresses their weakness with discrete modalities, allowing the model to selectively propagate or forget information along the sequence length dimension depending on the current token. Second, even though this change prevents the use of efficient convolutions, we design a hardware-aware parallel algorithm in recurrent mode. We integrate these selective SSMs into a simplified end-to-end neural network architecture without attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$ higher throughput than Transformers) and linear scaling in sequence length, and its performance improves on real data up to million-length sequences. As a general sequence model backbone, Mamba achieves state-of-the-art performance across several modalities such as language, audio, and genomics. On language modeling, our Mamba-3B model outperforms Transformers of the same size and matches Transformers twice its size, both in pretraining and downstream evaluation.",True,True,"Gu, Albert and Dao, Tri",2023.0,,,,arXiv preprint arXiv:2312.00752,Mamba: Linear-Time Sequence Modeling with Selective State Spaces,Mamba: Linear-Time Sequence Modeling with Selective State Spaces,https://openreview.net/forum?id=tEYskw1VY2,"This paper proposes Mamba, a linear-time sequence model with an intra-layer combination of Selective S4D, Short Convolution and Gated Linear Unit. 
The paper" "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,qu2024survey,\cite{qu2024survey},A Survey of Mamba,http://arxiv.org/abs/2408.01129v6,"As one of the most representative DL techniques, Transformer architecture has empowered numerous advanced models, especially the large language models (LLMs) that comprise billions of parameters, becoming a cornerstone in deep learning. Despite the impressive achievements, Transformers still face inherent limitations, particularly the time-consuming inference resulting from the quadratic computation complexity of attention calculation. Recently, a novel architecture named Mamba, drawing inspiration from classical state space models (SSMs), has emerged as a promising alternative for building foundation models, delivering comparable modeling abilities to Transformers while preserving near-linear scalability concerning sequence length. This has sparked an increasing number of studies actively exploring Mamba's potential to achieve impressive performance across diverse domains. Given such rapid evolution, there is a critical need for a systematic review that consolidates existing Mamba-empowered models, offering a comprehensive understanding of this emerging model architecture. In this survey, we therefore conduct an in-depth investigation of recent Mamba-associated studies, covering three main aspects: the advancements of Mamba-based models, the techniques of adapting Mamba to diverse data, and the applications where Mamba can excel. Specifically, we first review the foundational knowledge of various representative deep learning models and the details of Mamba-1&2 as preliminaries. Then, to showcase the significance of Mamba for AI, we comprehensively review the related studies focusing on Mamba models' architecture design, data adaptability, and applications. Finally, we present a discussion of current limitations and explore various promising research directions to provide deeper insights for future investigations.",True,True,"Qu, Haohao and Ning, Liangbo and An, Rui and Fan, Wenqi and Derr, Tyler and Liu, Hui and Xu, Xin and Li, Qing",2024.0,,,,arXiv preprint arXiv:2408.01129,A Survey of Mamba,A Survey of Mamba,http://arxiv.org/pdf/2408.01129v6,"As one of the most representative DL techniques, Transformer architecture has empowered numerous advanced models, especially the large language models (LLMs) that comprise billions of parameters, becoming a cornerstone in deep learning. Despite the impressive achievements, Transformers still face inherent limitations, particularly the time-consuming inference resulting from the quadratic computation complexity of attention calculation. Recently, a novel architecture named Mamba, drawing inspiration from classical state space models (SSMs), has emerged as a promising alternative for building foundation models, delivering comparable modeling abilities to Transformers while preserving near-linear scalability concerning sequence length. This has sparked an increasing number of studies actively exploring Mamba's potential to achieve impressive performance across diverse domains. Given such rapid evolution, there is a critical need for a systematic review that consolidates existing Mamba-empowered models, offering a comprehensive understanding of this emerging model architecture. 
In this survey, we therefore conduct an in-depth investigation of recent Mamba-associated studies, covering three main aspects: the advancements of Mamba-based models, the techniques of adapting Mamba to diverse data, and the applications where Mamba can excel. Specifically, we first review the foundational knowledge of various representative deep learning models and the details of Mamba-1&2 as preliminaries. Then, to showcase the significance of Mamba for AI, we comprehensively review the related studies focusing on Mamba models' architecture design, data adaptability, and applications. Finally, we present a discussion of current limitations and explore various promising research directions to provide deeper insights for future investigations." "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,dao2024transformers,\cite{dao2024transformers},"Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality",http://arxiv.org/abs/2405.21060v1,"While Transformers have been the main architecture behind deep learning's success in language modeling, state-space models (SSMs) such as Mamba have recently been shown to match or outperform Transformers at small to medium scale. We show that these families of models are actually quite closely related, and develop a rich framework of theoretical connections between SSMs and variants of attention, connected through various decompositions of a well-studied class of structured semiseparable matrices. Our state space duality (SSD) framework allows us to design a new architecture (Mamba-2) whose core layer is a refinement of Mamba's selective SSM that is 2-8X faster, while continuing to be competitive with Transformers on language modeling.",True,True,"Dao, Tri and Gu, Albert",2024.0,,,,arXiv preprint arXiv:2405.21060,"Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality",Transformers are SSMs: Generalized Models and Efficient ...,https://openreview.net/pdf/54bf495d93336f1f195f264c1b6c2805169b3492.pdf,"Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality, Appendix D.3.3 (Fully Recurrent Mode): Note that the fully recurrent mode, where the recurrence is evolved one step at a time (15), is simply an instantiation of the state-passing mode with chunk size k=1." "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,MambaRec,\cite{MambaRec},"Uncovering Selective State Space Model's Capabilities in Lifelong Sequential Recommendation",http://arxiv.org/abs/2403.16371v1,"Sequential Recommenders have been widely applied in various online services, aiming to model users' dynamic interests from their sequential interactions. With users increasingly engaging with online platforms, vast amounts of lifelong user behavioral sequences have been generated. However, existing sequential recommender models often struggle to handle such lifelong sequences. The primary challenges stem from computational complexity and the ability to capture long-range dependencies within the sequence. Recently, a state space model featuring a selective mechanism (i.e., Mamba) has emerged. In this work, we investigate the performance of Mamba for lifelong sequential recommendation (i.e., length>=2k). More specifically, we leverage the Mamba block to model lifelong user sequences selectively.
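The "selective" mechanism that the Mamba entries above contrast with time-invariant SSMs makes B, C, and the discretization step functions of the current token, so the recurrence keeps or forgets content-dependently. An unoptimized per-step sketch (the real kernel uses a hardware-aware parallel scan; `B_proj`, `C_proj`, and `dt_proj` are assumed linear projections, not names from the released code):

```python
import torch
import torch.nn.functional as F

def selective_ssm_scan(x, A, B_proj, C_proj, dt_proj):
    """x: (batch, len, dim); A: (dim, n) negative decay rates per channel.
    B, C and the step size depend on the input token, so the hidden state
    can selectively propagate or discard information along the sequence."""
    batch, length, dim = x.shape
    n = A.size(-1)
    h = torch.zeros(batch, dim, n, device=x.device)
    ys = []
    for t in range(length):
        xt = x[:, t]                                 # (batch, dim)
        dt = F.softplus(dt_proj(xt)).unsqueeze(-1)   # (batch, dim, 1) step size
        B = B_proj(xt).unsqueeze(1)                  # (batch, 1, n)
        C = C_proj(xt).unsqueeze(1)                  # (batch, 1, n)
        h = torch.exp(dt * A) * h + dt * B * xt.unsqueeze(-1)  # discretized update
        ys.append((h * C).sum(-1))                   # y_t = C h_t, (batch, dim)
    return torch.stack(ys, dim=1)                    # (batch, len, dim)
```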
We conduct extensive experiments to evaluate the performance of representative sequential recommendation models in the setting of lifelong sequences. Experiments on two real-world datasets demonstrate the superiority of Mamba. We found that RecMamba achieves performance comparable to the representative model while significantly reducing training duration by approximately 70% and memory costs by 80%. Codes and data are available at \url{https://github.com/nancheng58/RecMamba}.",True,True,"Yang, Jiyuan and Li, Yuanzi and Zhao, Jingyu and Wang, Hanbing and Ma, Muyang and Ma, Jun and Ren, Zhaochun and Zhang, Mengqi and Xin, Xin and Chen, Zhumin and others",2024.0,,,,arXiv preprint arXiv:2403.16371,"Uncovering Selective State Space Model's Capabilities in Lifelong Sequential Recommendation",[PDF] Uncovering Selective State Space Model's Capabilities in Lifelong ...,https://arxiv.org/pdf/2403.16371,We conduct extensive ex- periments to evaluate the performance of representative sequential recommendation models in the setting of lifelong "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,wang2024echomamba4rec,\cite{wang2024echomamba4rec},"EchoMamba4Rec: Harmonizing Bidirectional State Space Models with Spectral Filtering for Advanced Sequential Recommendation",http://arxiv.org/abs/2406.02638v2,"Predicting user preferences and sequential dependencies based on historical behavior is the core goal of sequential recommendation. Although attention-based models have shown effectiveness in this field, they often struggle with inference inefficiency due to the quadratic computational complexity inherent in attention mechanisms, especially with long-range behavior sequences. Drawing inspiration from the recent advancements of state space models (SSMs) in control theory, which provide a robust framework for modeling and controlling dynamic systems, we introduce EchoMamba4Rec. Control theory emphasizes the use of SSMs for managing long-range dependencies and maintaining inferential efficiency through structured state matrices. EchoMamba4Rec leverages these control relationships in sequential recommendation and integrates bi-directional processing with frequency-domain filtering to capture complex patterns and dependencies in user interaction data more effectively. Our model benefits from the ability of state space models (SSMs) to learn and perform parallel computations, significantly enhancing computational efficiency and scalability. It features a bi-directional Mamba module that incorporates both forward and reverse Mamba components, leveraging information from both past and future interactions. Additionally, a filter layer operates in the frequency domain using learnable Fast Fourier Transform (FFT) and learnable filters, followed by an inverse FFT to refine item embeddings and reduce noise. We also integrate Gate Linear Units (GLU) to dynamically control information flow, enhancing the model's expressiveness and training stability. 
Experimental results demonstrate that EchoMamba significantly outperforms existing models, providing more accurate and personalized recommendations.",True,True,"Wang, Yuda and He, Xuxin and Zhu, Shengxin",2024.0,,,,arXiv preprint arXiv:2406.02638,"EchoMamba4Rec: Harmonizing Bidirectional State Space Models with Spectral Filtering for Advanced Sequential Recommendation",EchoMamba4Rec: Harmonizing Bidirectional State Space ...,https://www.researchgate.net/publication/381190112_EchoMamba4Rec_Harmonizing_Bidirectional_State_Space_Models_with_Spectral_Filtering_for_Advanced_Sequential_Recommendation,EchoMamba4Rec leverages these control relationships in sequential recommendation and integrates bi-directional processing with frequency-domain "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,cao2024mamba4kt,\cite{cao2024mamba4kt},Mamba4KT:An Efficient and Effective Mamba-based Knowledge Tracing Model,http://arxiv.org/abs/2405.16542v1,"Knowledge tracing (KT) enhances student learning by leveraging past performance to predict future performance. Current research utilizes models based on attention mechanisms and recurrent neural network structures to capture long-term dependencies and correlations between exercises, aiming to improve model accuracy. Due to the growing amount of data in smart education scenarios, this poses a challenge in terms of time and space consumption for knowledge tracing models. However, existing research often overlooks the efficiency of model training and inference and the constraints of training resources. Recognizing the significance of prioritizing model efficiency and resource usage in knowledge tracing, we introduce Mamba4KT. This novel model is the first to explore enhanced efficiency and resource utilization in knowledge tracing. We also examine the interpretability of the Mamba structure both sequence-level and exercise-level to enhance model interpretability. Experimental findings across three public datasets demonstrate that Mamba4KT achieves comparable prediction accuracy to state-of-the-art models while significantly improving training and inference efficiency and resource utilization. 
As educational data continues to grow, our work suggests a promising research direction for knowledge tracing that improves model prediction accuracy, model efficiency, resource utilization, and interpretability simultaneously.",True,True,"Cao, Yang and Zhang, Wei",2024.0,,,,arXiv preprint arXiv:2405.16542,Mamba4KT:An Efficient and Effective Mamba-based Knowledge Tracing Model,Mamba4KT:An Efficient and Effective Mamba-based ...,https://arxiv.org/html/2405.16542v1,"We introduce a knowledge tracing model Mamba4KT based on selective state space model, which improves the training and inference efficiency and" "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,liu2024bidirectional,\cite{liu2024bidirectional},Bidirectional gated mamba for sequential recommendation,,,True,False,"Liu, Ziwei and Liu, Qidong and Wang, Yejing and Wang, Wanyu and Jia, Pengyue and Wang, Maolin and Liu, Zitao and Chang, Yi and Zhao, Xiangyu",2024.0,,,,arXiv preprint arXiv:2408.11451,Bidirectional gated mamba for sequential recommendation,Bidirectional Gated Mamba for Sequential Recommendation,https://openreview.net/forum?id=xaJx6aRwRG,"To overcome these issues, we introduce a new framework named Selective Gated Mamba (SIGMA) for Sequential Recommendation. This framework leverages a Partially Flipped Mamba (PF-Mamba) to construct a bidirectional architecture specifically tailored to improve contextual modeling. Additionally, an input-sensitive Dense Selective Gate (DS Gate) is employed to optimize directional weights and enhance the processing of sequential information in PF-Mamba." "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,yang2024uncovering,\cite{yang2024uncovering},"Uncovering Selective State Space Model's Capabilities in Lifelong Sequential Recommendation",http://arxiv.org/abs/2403.16371v1,"Sequential Recommenders have been widely applied in various online services, aiming to model users' dynamic interests from their sequential interactions. With users increasingly engaging with online platforms, vast amounts of lifelong user behavioral sequences have been generated. However, existing sequential recommender models often struggle to handle such lifelong sequences. The primary challenges stem from computational complexity and the ability to capture long-range dependencies within the sequence. Recently, a state space model featuring a selective mechanism (i.e., Mamba) has emerged. In this work, we investigate the performance of Mamba for lifelong sequential recommendation (i.e., length>=2k). More specifically, we leverage the Mamba block to model lifelong user sequences selectively. We conduct extensive experiments to evaluate the performance of representative sequential recommendation models in the setting of lifelong sequences. Experiments on two real-world datasets demonstrate the superiority of Mamba. We found that RecMamba achieves performance comparable to the representative model while significantly reducing training duration by approximately 70% and memory costs by 80%.
Codes and data are available at \url{https://github.com/nancheng58/RecMamba}.",True,True,"Yang, Jiyuan and Li, Yuanzi and Zhao, Jingyu and Wang, Hanbing and Ma, Muyang and Ma, Jun and Ren, Zhaochun and Zhang, Mengqi and Xin, Xin and Chen, Zhumin and others",2024.0,,,,arXiv preprint arXiv:2403.16371,"Uncovering Selective State Space Model's Capabilities in Lifelong Sequential Recommendation",[PDF] Uncovering Selective State Space Model's Capabilities in Lifelong ...,https://arxiv.org/pdf/2403.16371,We conduct extensive experiments to evaluate the performance of representative sequential recommendation models in the setting of lifelong "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,Visionzhu,\cite{Visionzhu},"Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model",http://arxiv.org/abs/2401.09417v3,"Recently the state space models (SSMs) with efficient hardware-aware designs, i.e., the Mamba deep learning model, have shown great potential for long sequence modeling. Meanwhile building efficient and generic vision backbones purely upon SSMs is an appealing direction. However, representing visual data is challenging for SSMs due to the position-sensitivity of visual data and the requirement of global context for visual understanding. In this paper, we show that the reliance on self-attention for visual representation learning is not necessary and propose a new generic vision backbone with bidirectional Mamba blocks (Vim), which marks the image sequences with position embeddings and compresses the visual representation with bidirectional state space models. On ImageNet classification, COCO object detection, and ADE20k semantic segmentation tasks, Vim achieves higher performance compared to well-established vision transformers like DeiT, while also demonstrating significantly improved computation & memory efficiency. For example, Vim is 2.8$\times$ faster than DeiT and saves 86.8% GPU memory when performing batch inference to extract features on images with a resolution of 1248$\times$1248. The results demonstrate that Vim is capable of overcoming the computation & memory constraints on performing Transformer-style understanding for high-resolution images and it has great potential to be the next-generation backbone for vision foundation models. Code is available at https://github.com/hustvl/Vim.",True,True,"Zhu, Lianghui and Liao, Bencheng and Zhang, Qian and Wang, Xinlong and Liu, Wenyu and Wang, Xinggang",2024.0,,,,arXiv preprint arXiv:2401.09417,"Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model",Vision Mamba: Efficient Visual Representation Learning with ... - arXiv,https://arxiv.org/abs/2401.09417,"In this paper, we show that the reliance on self-attention for visual representation learning is not necessary and propose a new generic vision backbone with bidirectional Mamba blocks (Vim), which marks the image sequences with position embeddings and compresses the visual representation with bidirectional state space models.
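The bidirectional pattern shared by Vim above and the PF-Mamba/EchoMamba4Rec rows earlier reduces to: run a causal encoder forward, run it again on the time-reversed sequence, and combine. Below is a minimal NumPy sketch, with a simple exponential moving average standing in for the Mamba block; summation is one combination choice among several (gating and concatenation are common alternatives).

import numpy as np

def causal_encoder(x, decay=0.9):
    # Stand-in for any left-to-right sequence encoder (e.g., a Mamba block).
    h = np.zeros_like(x[0])
    out = np.zeros_like(x)
    for t in range(len(x)):
        h = decay * h + (1 - decay) * x[t]
        out[t] = h
    return out

def bidirectional_encode(x):
    fwd = causal_encoder(x)                 # past-to-future pass
    bwd = causal_encoder(x[::-1])[::-1]     # future-to-past pass, flipped back
    return fwd + bwd

x = np.random.default_rng(0).normal(size=(20, 16))   # (seq_len, dim)
print(bidirectional_encode(x).shape)                 # (20, 16)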
" "STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation",2505.03484v1,mamba4rec,\cite{mamba4rec},"Mamba4Rec: Towards Efficient Sequential Recommendation with Selective State Space Models",http://arxiv.org/abs/2403.03900v2,"Sequential recommendation aims to estimate the dynamic user preferences and sequential dependencies among historical user behaviors. Although Transformer-based models have proven to be effective for sequential recommendation, they suffer from the inference inefficiency problem stemming from the quadratic computational complexity of attention operators, especially for long behavior sequences. Inspired by the recent success of state space models (SSMs), we propose Mamba4Rec, which is the first work to explore the potential of selective SSMs for efficient sequential recommendation. Built upon the basic Mamba block which is a selective SSM with an efficient hardware-aware parallel algorithm, we design a series of sequential modeling techniques to further promote model performance while maintaining inference efficiency. Through experiments on public datasets, we demonstrate how Mamba4Rec effectively tackles the effectiveness-efficiency dilemma, outperforming both RNN- and attention-based baselines in terms of both effectiveness and efficiency. The code is available at https://github.com/chengkai-liu/Mamba4Rec.",True,True,"Liu, Chengkai and Lin, Jianghao and Wang, Jianling and Liu, Hanzhou and Caverlee, James",2024.0,,,,arXiv preprint arXiv:2403.03900,"Mamba4Rec: Towards Efficient Sequential Recommendation with Selective State Space Models",Towards Efficient Sequential Recommendation with ...,https://arxiv.org/pdf/2403.03900,"by C Liu · 2024 · Cited by 66 — We describe how Mamba4Rec constructs a sequential recommendation model through an embedding layer, selective state space models, and a prediction layer." Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,perozziDeepwalk2014,\cite{perozziDeepwalk2014},Deep{W}alk: Online learning of social representations,,,True,False,"Perozzi, Bryan and Al-Rfou, Rami and Skiena, Steven",2014.0,,,,,Deep{W}alk: Online learning of social representations,DeepWalk: online learning of social representations,https://dl.acm.org/doi/10.1145/2623330.2623732,"We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations." Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,groverNode2vecScalableFeature2016,\cite{groverNode2vecScalableFeature2016},node2vec: Scalable Feature Learning for Networks,http://arxiv.org/abs/1607.00653v1,"Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks.
In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.",True,True,"Grover, Aditya and Leskovec, Jure",2016.0,,,,,node2vec: Scalable Feature Learning for Networks,node2vec: Scalable Feature Learning for Networks,http://arxiv.org/pdf/1607.00653v1,"Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks." Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,huangGraphRecurrentNetworks2019,\cite{huangGraphRecurrentNetworks2019},Graph recurrent networks with attributed random walks,,,True,False,"Huang, Xiao and Song, Qingquan and Li, Yuening and Hu, Xia",2019.0,,,,,Graph recurrent networks with attributed random walks,[PDF] Attributed Random Walks for Graph Recurrent Networks,https://www4.comp.polyu.edu.hk/~xiaohuang/docs/Xiao_KDD19_slides.pdf,Apply random walks on attributed networks to boost deep node representation learning.
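The DeepWalk and node2vec rows above, and the random-walk GNNs that follow, all rest on sampling walks from the graph; node2vec's contribution is the biased second-order walk sketched below. This is a minimal reading of the procedure described in the abstract (return parameter p, in-out parameter q); in the full method the sampled walks are then fed to a skip-gram objective, which is omitted here.

import random

def node2vec_walk(adj, start, length, p=1.0, q=1.0, seed=0):
    # adj: dict node -> list of neighbours. p penalises immediately returning;
    # q trades off BFS-like (q > 1) against DFS-like (q < 1) exploration.
    rng = random.Random(seed)
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = adj[cur]
        if not nbrs:
            break
        if len(walk) == 1:                      # first step: uniform choice
            walk.append(rng.choice(nbrs))
            continue
        prev = walk[-2]
        weights = []
        for nxt in nbrs:
            if nxt == prev:                     # distance 0 from the previous node
                weights.append(1.0 / p)
            elif nxt in adj[prev]:              # distance 1: stays close (BFS-like)
                weights.append(1.0)
            else:                               # distance 2: moves away (DFS-like)
                weights.append(1.0 / q)
        walk.append(rng.choices(nbrs, weights=weights)[0])
    return walk

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
print(node2vec_walk(adj, start=0, length=8, p=0.5, q=2.0))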
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,nikolentzosRandomwalkgraphneuralnetworks2020,\cite{nikolentzosRandomwalkgraphneuralnetworks2020},Random walk graph neural networks,,,True,False,"Nikolentzos, Giannis and Vazirgiannis, Michalis",2020.0,,,,,Random walk graph neural networks,Random Walk Graph Neural Networks,https://proceedings.neurips.cc/paper/2020/file/ba95d78a7c942571185308775a97a3a0-Paper.pdf,"by G Nikolentzos · 2020 · Cited by 160 — In this paper, we propose a more intuitive and transparent architecture for graph-structured data, so-called Random Walk Graph Neural Network (RWNN). The first" Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,jinRawgnn2022,\cite{jinRawgnn2022},Raw-{GNN}: Random walk aggregation based graph neural network,,,True,False,"Jin, Di and Wang, Rui and Ge, Meng and He, Dongxiao and Li, Xiang and Lin, Wei and Zhang, Weixiong",2022.0,,,,arXiv:2206.13953,Raw-{GNN}: Random walk aggregation based graph neural network,RAndom Walk Aggregation based Graph Neural Network,https://www.ijcai.org/proceedings/2022/0293.pdf,"by D Jin · Cited by 59 — Here, we introduce a novel aggregation mechanism and develop a RAndom Walk Aggregation-based Graph Neural Network (called RAW-GNN) method. The proposed." Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,wangNonConvGNN2024,\cite{wangNonConvGNN2024},Non-convolutional Graph Neural Networks,http://arxiv.org/abs/2408.00165v3,"Rethink convolution-based graph neural networks (GNN) -- they characteristically suffer from limited expressiveness, over-smoothing, and over-squashing, and require specialized sparse kernels for efficient computation. Here, we design a simple graph learning module entirely free of convolution operators, coined random walk with unifying memory (RUM) neural network, where an RNN merges the topological and semantic graph features along the random walks terminating at each node. Relating the rich literature on RNN behavior and graph topology, we theoretically show and experimentally verify that RUM attenuates the aforementioned symptoms and is more expressive than the Weisfeiler-Lehman (WL) isomorphism test. On a variety of node- and graph-level classification and regression tasks, RUM not only achieves competitive performance, but is also robust, memory-efficient, scalable, and faster than the simplest convolutional GNNs.",True,True,"Wang, Yuanqing and Cho, Kyunghyun",2024.0,,,,arXiv:2408.00165,Non-convolutional Graph Neural Networks,[2408.00165] Non-convolutional Graph Neural Networks,https://arxiv.org/abs/2408.00165,"by Y Wang · 2024 · Cited by 12 — We design a simple graph learning module entirely free of convolution operators, coined random walk with unifying memory (RUM) neural network." Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,kipfSemiSupervisedClassificationGraph2017,\cite{kipfSemiSupervisedClassificationGraph2017},Semi-Supervised Classification with Graph Convolutional Networks,http://arxiv.org/abs/1609.02907v4,"We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions.
Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.",True,True,"Kipf, Thomas N and Welling, Max",2016.0,,,,,Semi-Supervised Classification with Graph Convolutional Networks,Semi-Supervised Classification with Graph Convolutional Networks,https://openreview.net/forum?id=SJU4ayYgl,"Abstract: We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. TL;DR: Semi-supervised classification with a CNN model for graphs." Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,wuSimplifyingGraphConvolutional2019,\cite{wuSimplifyingGraphConvolutional2019},Simplifying Graph Convolutional Networks,http://arxiv.org/abs/1902.07153v2,"Graph Convolutional Networks (GCNs) and their variants have experienced significant attention and have become the de facto methods for learning graph representations. GCNs derive inspiration primarily from recent deep learning approaches, and as a result, may inherit unnecessary complexity and redundant computation. In this paper, we reduce this excess complexity through successively removing nonlinearities and collapsing weight matrices between consecutive layers. We theoretically analyze the resulting linear model and show that it corresponds to a fixed low-pass filter followed by a linear classifier. Notably, our experimental evaluation demonstrates that these simplifications do not negatively impact accuracy in many downstream applications. Moreover, the resulting model scales to larger datasets, is naturally interpretable, and yields up to two orders of magnitude speedup over FastGCN.",True,True,"Wu, Felix and Souza, Amauri and Zhang, Tianyi and Fifty, Christopher and Yu, Tao and Weinberger, Kilian",2019.0,,,,,Simplifying Graph Convolutional Networks,Simplifying Graph Convolutional Networks,http://arxiv.org/pdf/1902.07153v2,"Graph Convolutional Networks (GCNs) and their variants have experienced significant attention and have become the de facto methods for learning graph representations. GCNs derive inspiration primarily from recent deep learning approaches, and as a result, may inherit unnecessary complexity and redundant computation. In this paper, we reduce this excess complexity through successively removing nonlinearities and collapsing weight matrices between consecutive layers. We theoretically analyze the resulting linear model and show that it corresponds to a fixed low-pass filter followed by a linear classifier. Notably, our experimental evaluation demonstrates that these simplifications do not negatively impact accuracy in many downstream applications.
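The Kipf and Welling row above defines the first-order propagation rule H' = sigma(D^{-1/2}(A+I)D^{-1/2} H W). A minimal NumPy sketch of one such layer (dense matrices and random weights for illustration; real implementations use sparse operators):

import numpy as np

def gcn_layer(A, H, W):
    # One GCN layer: symmetric normalisation with self-loops, then transform + ReLU.
    A_hat = A + np.eye(A.shape[0])                         # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    S = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^-1/2 (A+I) D^-1/2
    return np.maximum(S @ H @ W, 0.0)                      # propagate, transform, ReLU

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 4))      # node features
W = rng.normal(size=(4, 2))      # weights (randomly initialised here)
print(gcn_layer(A, H, W).shape)  # (3, 2)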
Moreover, the resulting model scales to larger datasets, is naturally interpretable, and yields up to two orders of magnitude speedup over FastGCN." Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,hamiltonInductiveRepresentationLearning2017,\cite{hamiltonInductiveRepresentationLearning2017},Inductive Representation Learning in Large Attributed Graphs,http://arxiv.org/abs/1710.09471v2,"Graphs (networks) are ubiquitous and allow us to model entities (nodes) and the dependencies (edges) between them. Learning a useful feature representation from graph data lies at the heart and success of many machine learning tasks such as classification, anomaly detection, link prediction, among many others. Many existing techniques use random walks as a basis for learning features or estimating the parameters of a graph model for a downstream prediction task. Examples include recent node embedding methods such as DeepWalk, node2vec, as well as graph-based deep learning algorithms. However, the simple random walk used by these methods is fundamentally tied to the identity of the node. This has three main disadvantages. First, these approaches are inherently transductive and do not generalize to unseen nodes and other graphs. Second, they are not space-efficient as a feature vector is learned for each node which is impractical for large graphs. Third, most of these approaches lack support for attributed graphs. To make these methods more generally applicable, we propose a framework for inductive network representation learning based on the notion of attributed random walk that is not tied to node identity and is instead based on learning a function $\Phi : \mathrm{\rm \bf x} \rightarrow w$ that maps a node attribute vector $\mathrm{\rm \bf x}$ to a type $w$. This framework serves as a basis for generalizing existing methods such as DeepWalk, node2vec, and many other previous methods that leverage traditional random walks.",True,True,"Hamilton, Will and Ying, Zhitao and Leskovec, Jure",2017.0,,,,,Inductive Representation Learning in Large Attributed Graphs,Inductive Representation Learning in Large Attributed Graphs,http://arxiv.org/pdf/1710.09471v2,"Graphs (networks) are ubiquitous and allow us to model entities (nodes) and the dependencies (edges) between them. Learning a useful feature representation from graph data lies at the heart and success of many machine learning tasks such as classification, anomaly detection, link prediction, among many others. Many existing techniques use random walks as a basis for learning features or estimating the parameters of a graph model for a downstream prediction task. Examples include recent node embedding methods such as DeepWalk, node2vec, as well as graph-based deep learning algorithms. However, the simple random walk used by these methods is fundamentally tied to the identity of the node. This has three main disadvantages. First, these approaches are inherently transductive and do not generalize to unseen nodes and other graphs. Second, they are not space-efficient as a feature vector is learned for each node which is impractical for large graphs. Third, most of these approaches lack support for attributed graphs. 
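The SGC row above removes the nonlinearities from that rule, so K layers collapse into a single precomputed propagation S^K X followed by a linear classifier. A sketch of the preprocessing step (any linear model can then be fit on the returned features):

import numpy as np

def sgc_features(A, X, K=2):
    # Apply the normalised adjacency K times with no nonlinearity in between.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    S = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    for _ in range(K):
        X = S @ X
    return X   # fixed low-pass-filtered features, per the paper's analysis

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
print(sgc_features(A, np.eye(3), K=2))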
To make these methods more generally applicable, we propose a framework for inductive network representation learning based on the notion of attributed random walk that is not tied to node identity and is instead based on learning a function $\Phi : \mathrm{\rm \bf x} \rightarrow w$ that maps a node attribute vector $\mathrm{\rm \bf x}$ to a type $w$. This framework serves as a basis for generalizing existing methods such as DeepWalk, node2vec, and many other previous methods that leverage traditional random walks." Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,gilmerNeuralMessagePassing2017,\cite{gilmerNeuralMessagePassing2017},Neural Message Passing for Quantum Chemistry,http://arxiv.org/abs/1704.01212v2,"Supervised learning on molecules has incredible potential to be useful in chemistry, drug discovery, and materials science. Luckily, several promising and closely related neural network models invariant to molecular symmetries have already been described in the literature. These models learn a message passing algorithm and aggregation procedure to compute a function of their entire input graph. At this point, the next step is to find a particularly effective variant of this general approach and apply it to chemical prediction benchmarks until we either solve them or reach the limits of the approach. In this paper, we reformulate existing models into a single common framework we call Message Passing Neural Networks (MPNNs) and explore additional novel variations within this framework. Using MPNNs we demonstrate state of the art results on an important molecular property prediction benchmark; these results are strong enough that we believe future work should focus on datasets with larger molecules or more accurate ground truth labels.",True,True,"Gilmer, Justin and Schoenholz, Samuel S. and Riley, Patrick F. and Vinyals, Oriol and Dahl, George E.",2017.0,,,,,Neural Message Passing for Quantum Chemistry,Neural Message Passing for Quantum Chemistry,http://arxiv.org/pdf/1704.01212v2,"Supervised learning on molecules has incredible potential to be useful in chemistry, drug discovery, and materials science. Luckily, several promising and closely related neural network models invariant to molecular symmetries have already been described in the literature. These models learn a message passing algorithm and aggregation procedure to compute a function of their entire input graph. At this point, the next step is to find a particularly effective variant of this general approach and apply it to chemical prediction benchmarks until we either solve them or reach the limits of the approach. In this paper, we reformulate existing models into a single common framework we call Message Passing Neural Networks (MPNNs) and explore additional novel variations within this framework. Using MPNNs we demonstrate state of the art results on an important molecular property prediction benchmark; these results are strong enough that we believe future work should focus on datasets with larger molecules or more accurate ground truth labels." Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,velickovicDeepGraphInfomax2018,\cite{velickovicDeepGraphInfomax2018},Deep Graph Infomax,http://arxiv.org/abs/1809.10341v2,"We present Deep Graph Infomax (DGI), a general approach for learning node representations within graph-structured data in an unsupervised manner. 
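The MPNN row above frames such architectures as a message function plus an update function. A deliberately tiny sketch of one message-passing round over an edge list (the lambda choices at the bottom are arbitrary illustrations, not the paper's instantiations):

import numpy as np

def mpnn_step(edges, h, msg_fn, upd_fn):
    # One round: each node sums incoming messages, then updates its state.
    agg = {v: np.zeros_like(next(iter(h.values()))) for v in h}
    for u, v in edges:                        # directed edge u -> v
        agg[v] += msg_fn(h[u], h[v])
    return {v: upd_fn(h[v], agg[v]) for v in h}

h = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0]), 2: np.array([1.0, 1.0])}
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
msg = lambda hu, hv: hu                       # message = sender state
upd = lambda hv, m: np.tanh(hv + m)           # update = squashed sum
print(mpnn_step(edges, h, msg, upd))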
DGI relies on maximizing mutual information between patch representations and corresponding high-level summaries of graphs---both derived using established graph convolutional network architectures. The learnt patch representations summarize subgraphs centered around nodes of interest, and can thus be reused for downstream node-wise learning tasks. In contrast to most prior approaches to unsupervised learning with GCNs, DGI does not rely on random walk objectives, and is readily applicable to both transductive and inductive learning setups. We demonstrate competitive performance on a variety of node classification benchmarks, which at times even exceeds the performance of supervised learning.",True,True,"Velickovic, Petar and Fedus, William and Hamilton, William L and Li{\`o}, Pietro and Bengio, Yoshua and Hjelm, R Devon",2019.0,,,,,Deep Graph Infomax,[1809.10341] Deep Graph Infomax - arXiv,https://arxiv.org/abs/1809.10341,"Abstract:We present Deep Graph Infomax (DGI), a general approach for learning node representations within graph-structured data in an" Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,xuHowPowerfulAre2019,\cite{xuHowPowerfulAre2019},How Powerful are Graph Neural Networks?,http://arxiv.org/abs/1810.00826v3,"Graph Neural Networks (GNNs) are an effective framework for representation learning of graphs. GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures. Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance.",True,True,"Xu, Keyulu and Hu, Weihua and Leskovec, Jure and Jegelka, Stefanie",2018.0,,,,arXiv:1810.00826,How Powerful are Graph Neural Networks?,How Powerful are Graph Neural Networks?,http://arxiv.org/pdf/1810.00826v3,"Graph Neural Networks (GNNs) are an effective framework for representation learning of graphs. GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures. 
Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance." Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,defferrardConvolutionalNeuralNetworks2016,\cite{defferrardConvolutionalNeuralNetworks2016},Convolutional neural networks on graphs with fast localized spectral filtering,,,True,False,"Defferrard, Micha{\""e}l and Bresson, Xavier and Vandergheynst, Pierre",2016.0,,,,,Convolutional neural networks on graphs with fast localized spectral filtering,Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering,http://arxiv.org/pdf/1606.09375v3,"In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes or words' embedding, represented by graphs. We present a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Importantly, the proposed technique offers the same linear computational complexity and constant learning complexity as classical CNNs, while being universal to any graph structure. Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs." Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,chienAdaptiveUniversalGeneralized2021,\cite{chienAdaptiveUniversalGeneralized2021},Adaptive Universal Generalized PageRank Graph Neural Network,http://arxiv.org/abs/2006.07988v6,"In many important graph data processing applications the acquired information includes both node features and observations of the graph topology. Graph neural networks (GNNs) are designed to exploit both sources of evidence but they do not optimally trade-off their utility and integrate them in a manner that is also universal. Here, universality refers to independence on homophily or heterophily graph assumptions. We address these issues by introducing a new Generalized PageRank (GPR) GNN architecture that adaptively learns the GPR weights so as to jointly optimize node feature and topological information extraction, regardless of the extent to which the node labels are homophilic or heterophilic. Learned GPR weights automatically adjust to the node label pattern, irrelevant on the type of initialization, and thereby guarantee excellent learning performance for label patterns that are usually hard to handle. Furthermore, they allow one to avoid feature over-smoothing, a process which renders feature information nondiscriminative, without requiring the network to be shallow. Our accompanying theoretical analysis of the GPR-GNN method is facilitated by novel synthetic benchmark datasets generated by the so-called contextual stochastic block model. 
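The Defferrard et al. row above (and the Chebyshev interpolation in this paper's own title) filters graph signals with Chebyshev polynomials of the rescaled Laplacian, evaluated by the three-term recurrence rather than an eigendecomposition. A NumPy sketch with fixed illustrative coefficients (at least two coefficients are assumed):

import numpy as np

def chebyshev_filter(L, x, theta, lam_max=2.0):
    # y = sum_k theta_k T_k(L_tilde) x, with T_k = 2 L_tilde T_{k-1} - T_{k-2}.
    n = L.shape[0]
    L_tilde = (2.0 / lam_max) * L - np.eye(n)   # rescale spectrum into [-1, 1]
    Tkm2, Tkm1 = x, L_tilde @ x                 # T_0 x and T_1 x
    y = theta[0] * Tkm2 + theta[1] * Tkm1
    for k in range(2, len(theta)):
        Tk = 2.0 * (L_tilde @ Tkm1) - Tkm2
        y += theta[k] * Tk
        Tkm2, Tkm1 = Tkm1, Tk
    return y

# Normalised Laplacian of a 3-node path graph (lam_max <= 2 always holds here).
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
L = np.eye(3) - A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
print(chebyshev_filter(L, np.array([1.0, 0.0, 0.0]), theta=[0.5, 0.3, 0.2]))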
We also compare the performance of our GNN architecture with that of several state-of-the-art GNNs on the problem of node-classification, using well-known benchmark homophilic and heterophilic datasets. The results demonstrate that GPR-GNN offers significant performance improvement compared to existing techniques on both synthetic and benchmark data.",True,True,"Chien, Eli and Peng, Jianhao and Li, Pan and Milenkovic, Olgica",2020.0,,,,arXiv:2006.07988,Adaptive Universal Generalized PageRank Graph Neural Network,Adaptive Universal Generalized PageRank Graph Neural Network,http://arxiv.org/pdf/2006.07988v6,"In many important graph data processing applications the acquired information includes both node features and observations of the graph topology. Graph neural networks (GNNs) are designed to exploit both sources of evidence but they do not optimally trade-off their utility and integrate them in a manner that is also universal. Here, universality refers to independence on homophily or heterophily graph assumptions. We address these issues by introducing a new Generalized PageRank (GPR) GNN architecture that adaptively learns the GPR weights so as to jointly optimize node feature and topological information extraction, regardless of the extent to which the node labels are homophilic or heterophilic. Learned GPR weights automatically adjust to the node label pattern, irrelevant on the type of initialization, and thereby guarantee excellent learning performance for label patterns that are usually hard to handle. Furthermore, they allow one to avoid feature over-smoothing, a process which renders feature information nondiscriminative, without requiring the network to be shallow. Our accompanying theoretical analysis of the GPR-GNN method is facilitated by novel synthetic benchmark datasets generated by the so-called contextual stochastic block model. We also compare the performance of our GNN architecture with that of several state-of-the-art GNNs on the problem of node-classification, using well-known benchmark homophilic and heterophilic datasets. The results demonstrate that GPR-GNN offers significant performance improvement compared to existing techniques on both synthetic and benchmark data." Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,heBernNetLearningArbitrary2021,\cite{heBernNetLearningArbitrary2021},Bern{N}et: Learning Arbitrary Graph Spectral Filters via Bernstein Approximation,,,True,False,"He, Mingguo and Wei, Zhewei and Huang, zengfeng and Xu, Hongteng",2021.0,,,,,Bern{N}et: Learning Arbitrary Graph Spectral Filters via Bernstein Approximation,[PDF] Learning Arbitrary Graph Spectral Filters via Bernstein Approximation,https://proceedings.neurips.cc/paper/2021/file/76f1cfd7754a6e4fc3281bcccb3d0902-Paper.pdf,"BernNet is a graph neural network that learns arbitrary graph spectral filters using Bernstein polynomial approximation, designing spectral properties by" Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,chenRevisitingGraphBased2020,\cite{chenRevisitingGraphBased2020},"Revisiting Graph based Collaborative Filtering: A Linear Residual Graph Convolutional Network Approach",http://arxiv.org/abs/2001.10167v1,"Graph Convolutional Networks (GCNs) are state-of-the-art graph based representation learning models by iteratively stacking multiple layers of convolution aggregation operations and non-linear activation operations. 
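GPR-GNN above makes the per-hop weights of such a polynomial filter learnable (BernNet plays the same game with a Bernstein basis). Stripped of training, the propagation is just a weighted sum of powers of the normalised adjacency; the gamma values below are placeholders for learned weights:

import numpy as np

def gpr_propagate(S, H, gamma):
    # out = sum_k gamma_k S^k H, computed with one hop per iteration.
    out = gamma[0] * H
    Z = H
    for g in gamma[1:]:
        Z = S @ Z
        out = out + g * Z
    return out

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
A_hat = A + np.eye(3)
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
S = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
print(gpr_propagate(S, np.eye(3), gamma=[0.4, 0.3, 0.2, 0.1]))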
Recently, in Collaborative Filtering (CF) based Recommender Systems (RS), by treating the user-item interaction behavior as a bipartite graph, some researchers model higher-layer collaborative signals with GCNs. These GCN based recommender models show superior performance compared to traditional works. However, these models suffer from training difficulty with non-linear activations for large user-item graphs. Besides, most GCN based models could not model deeper layers due to the over smoothing effect with the graph convolution operation. In this paper, we revisit GCN based CF models from two aspects. First, we empirically show that removing non-linearities would enhance recommendation performance, which is consistent with the theories in simple graph convolutional networks. Second, we propose a residual network structure that is specifically designed for CF with user-item interaction modeling, which alleviates the over smoothing problem in graph convolution aggregation operation with sparse user-item interaction data. The proposed model is a linear model and it is easy to train, scale to large datasets, and yield better efficiency and effectiveness on two real datasets. We publish the source code at https://github.com/newlei/LRGCCF.",True,True,"Chen, Lei and Wu, Le and Hong, Richang and Zhang, Kun and Wang, Meng",2020.0,,,,,"Revisiting Graph based Collaborative Filtering: A Linear Residual Graph Convolutional Network Approach",Revisiting Graph Based Collaborative Filtering: A Linear Residual ...,https://ojs.aaai.org/index.php/AAAI/article/view/5330,"In this paper, we revisit GCN based CF models from two aspects. First, we empirically show that removing non-linearities would enhance recommendation" Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,wangNeuralGraphCollaborative2019,\cite{wangNeuralGraphCollaborative2019},Neural Graph Collaborative Filtering,http://arxiv.org/abs/1905.08108v2,"Learning vector representations (aka. embeddings) of users and items lies at the core of modern recommender systems. Ranging from early matrix factorization to recently emerged deep learning based methods, existing efforts typically obtain a user's (or an item's) embedding by mapping from pre-existing features that describe the user (or the item), such as ID and attributes. We argue that an inherent drawback of such methods is that, the collaborative signal, which is latent in user-item interactions, is not encoded in the embedding process. As such, the resultant embeddings may not be sufficient to capture the collaborative filtering effect. In this work, we propose to integrate the user-item interactions -- more specifically the bipartite graph structure -- into the embedding process. We develop a new recommendation framework Neural Graph Collaborative Filtering (NGCF), which exploits the user-item graph structure by propagating embeddings on it. This leads to the expressive modeling of high-order connectivity in user-item graph, effectively injecting the collaborative signal into the embedding process in an explicit manner. We conduct extensive experiments on three public benchmarks, demonstrating significant improvements over several state-of-the-art models like HOP-Rec and Collaborative Memory Network. Further analysis verifies the importance of embedding propagation for learning better user and item representations, justifying the rationality and effectiveness of NGCF. 
Codes are available at https://github.com/xiangwang1223/neural_graph_collaborative_filtering.",True,True,"Wang, Xiang and He, Xiangnan and Wang, Meng and Feng, Fuli and Chua, Tat-Seng",2019.0,,,,,Neural Graph Collaborative Filtering,Neural Graph Collaborative Filtering,http://arxiv.org/pdf/1905.08108v2,"Learning vector representations (aka. embeddings) of users and items lies at the core of modern recommender systems. Ranging from early matrix factorization to recently emerged deep learning based methods, existing efforts typically obtain a user's (or an item's) embedding by mapping from pre-existing features that describe the user (or the item), such as ID and attributes. We argue that an inherent drawback of such methods is that, the collaborative signal, which is latent in user-item interactions, is not encoded in the embedding process. As such, the resultant embeddings may not be sufficient to capture the collaborative filtering effect. In this work, we propose to integrate the user-item interactions -- more specifically the bipartite graph structure -- into the embedding process. We develop a new recommendation framework Neural Graph Collaborative Filtering (NGCF), which exploits the user-item graph structure by propagating embeddings on it. This leads to the expressive modeling of high-order connectivity in user-item graph, effectively injecting the collaborative signal into the embedding process in an explicit manner. We conduct extensive experiments on three public benchmarks, demonstrating significant improvements over several state-of-the-art models like HOP-Rec and Collaborative Memory Network. Further analysis verifies the importance of embedding propagation for learning better user and item representations, justifying the rationality and effectiveness of NGCF. Codes are available at https://github.com/xiangwang1223/neural_graph_collaborative_filtering." Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,heLightGCNSimplifyingPowering2020,\cite{heLightGCNSimplifyingPowering2020},Light{GCN}: Simplifying and Powering Graph Convolution Network for Recommendation,,,True,False,"He, Xiangnan and Deng, Kuan and Wang, Xiang and Li, Yan and Zhang, YongDong and Wang, Meng",2020.0,,,,,Light{GCN}: Simplifying and Powering Graph Convolution Network for Recommendation,[PDF] LightGCN: Simplifying and Powering Graph Convolution Network for ...,https://arxiv.org/pdf/2002.02126,"In this work, we aim to simplify the design of GCN to make it more concise and appropriate for recommendation. We propose a new model named" Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,maoUltraGCNUltraSimplification2021,\cite{maoUltraGCNUltraSimplification2021},Ultra{GCN}: Ultra Simplification of Graph Convolutional Networks for Recommendation,,,True,False,"Mao, Kelong and Zhu, Jieming and Xiao, Xi and Lu, Biao and Wang, Zhaowei and He, Xiuqiang",2021.0,,,,,Ultra{GCN}: Ultra Simplification of Graph Convolutional Networks for Recommendation,UltraGCN: Ultra Simplification of Graph Convolutional Networks for ...,https://arxiv.org/abs/2110.15114,"View a PDF of the paper titled UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation, by Kelong Mao and 5 other authors In this paper, we take one step further to propose an ultra-simplified formulation of GCNs (dubbed UltraGCN), which skips infinite layers of message passing for efficient recommendation. 
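LightGCN, per the row above, keeps only neighbourhood aggregation: no feature transform, no nonlinearity, and a mean over layer outputs. A self-contained NumPy sketch on a toy 2-user, 3-item graph (random ID embeddings stand in for trained ones):

import numpy as np

def lightgcn_embeddings(S, E0, num_layers=3):
    # Repeated propagation on the normalised user-item graph, then layer mean.
    layers = [E0]
    E = E0
    for _ in range(num_layers):
        E = S @ E
        layers.append(E)
    return np.mean(layers, axis=0)

R = np.array([[1, 1, 0], [0, 1, 1]], dtype=float)          # user-item interactions
A = np.block([[np.zeros((2, 2)), R], [R.T, np.zeros((3, 3))]])
d = A.sum(axis=1)
d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
S = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
E0 = np.random.default_rng(0).normal(size=(5, 8))          # stacked user+item embeddings
E = lightgcn_embeddings(S, E0)
print(E[:2] @ E[2:].T)                                     # user-item scores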
" Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,heSGCF2023,\cite{heSGCF2023},Simplifying graph-based collaborative filtering for recommendation,,,True,False,"He, Li and Wang, Xianzhi and Wang, Dingxian and Zou, Haoyuan and Yin, Hongzhi and Xu, Guandong",2023.0,,,,,Simplifying graph-based collaborative filtering for recommendation,Simplifying Graph-based Collaborative Filtering for ...,https://opus.lib.uts.edu.au/bitstream/10453/164889/4/Simplifying%20Graph-based%20Collaborative%20Filtering%20for%20Recommendation.pdf,"by L He · 2023 · Cited by 28 — First, we remove non-linearities to enhance recommendation performance, which is consistent with the theories in simple graph convolutional networks. Second," Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,sunNeighborInteractionAware2020,\cite{sunNeighborInteractionAware2020},Neighbor Interaction Aware Graph Convolution Networks for Recommendation,,,True,False,"Sun, Jianing and Zhang, Yingxue and Guo, Wei and Guo, Huifeng and Tang, Ruiming and He, Xiuqiang and Ma, Chen and Coates, Mark",2020.0,,,,,Neighbor Interaction Aware Graph Convolution Networks for Recommendation,Neighbor Interaction Aware Graph Convolution Networks ...,https://dl.acm.org/doi/10.1145/3397271.3401123,"Neighbor Interaction Aware Graph Convolution Networks for Recommendation | Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval" Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,wangDisentangledGraphCollaborative2020,\cite{wangDisentangledGraphCollaborative2020},Disentangled Graph Collaborative Filtering,http://arxiv.org/abs/2007.01764v1,"Learning informative representations of users and items from the interaction data is of crucial importance to collaborative filtering (CF).
Present embedding functions exploit user-item relationships to enrich the representations, evolving from a single user-item instance to the holistic interaction graph. Nevertheless, they largely model the relationships in a uniform manner, while neglecting the diversity of user intents on adopting the items, which could be to pass time, for interest, or shopping for others like families. Such uniform approach to model user interests easily results in suboptimal representations, failing to model diverse relationships and disentangle user intents in representations. In this work, we pay special attention to user-item relationships at the finer granularity of user intents. We hence devise a new model, Disentangled Graph Collaborative Filtering (DGCF), to disentangle these factors and yield disentangled representations. Specifically, by modeling a distribution over intents for each user-item interaction, we iteratively refine the intent-aware interaction graphs and representations. Meanwhile, we encourage independence of different intents. This leads to disentangled representations, effectively distilling information pertinent to each intent. We conduct extensive experiments on three benchmark datasets, and DGCF achieves significant improvements over several state-of-the-art models like NGCF, DisenGCN, and MacridVAE. Further analyses offer insights into the advantages of DGCF on the disentanglement of user intents and interpretability of representations. Our codes are available in https://github.com/xiangwang1223/disentangled_graph_collaborative_filtering.",True,True,"Wang, Xiang and Jin, Hongye and Zhang, An and He, Xiangnan and Xu, Tong and Chua, Tat-Seng",2020.0,,,,,Disentangled Graph Collaborative Filtering,Disentangled Graph Collaborative Filtering,http://arxiv.org/pdf/2007.01764v1,"Learning informative representations of users and items from the interaction data is of crucial importance to collaborative filtering (CF). Present embedding functions exploit user-item relationships to enrich the representations, evolving from a single user-item instance to the holistic interaction graph. Nevertheless, they largely model the relationships in a uniform manner, while neglecting the diversity of user intents on adopting the items, which could be to pass time, for interest, or shopping for others like families. Such uniform approach to model user interests easily results in suboptimal representations, failing to model diverse relationships and disentangle user intents in representations. In this work, we pay special attention to user-item relationships at the finer granularity of user intents. We hence devise a new model, Disentangled Graph Collaborative Filtering (DGCF), to disentangle these factors and yield disentangled representations. Specifically, by modeling a distribution over intents for each user-item interaction, we iteratively refine the intent-aware interaction graphs and representations. Meanwhile, we encourage independence of different intents. This leads to disentangled representations, effectively distilling information pertinent to each intent. We conduct extensive experiments on three benchmark datasets, and DGCF achieves significant improvements over several state-of-the-art models like NGCF, DisenGCN, and MacridVAE. Further analyses offer insights into the advantages of DGCF on the disentanglement of user intents and interpretability of representations. Our codes are available in https://github.com/xiangwang1223/disentangled_graph_collaborative_filtering." 
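For context on training: the graph CF models in these rows (NGCF, LightGCN, DGCF, and those below) are typically optimised with the pairwise BPR objective over (user, observed item, sampled item) triples. A minimal, numerically stable sketch of that loss; the regularisation weight is illustrative:

import numpy as np

def bpr_loss(user_e, pos_e, neg_e, reg=1e-4):
    # -log sigmoid(s_pos - s_neg), via logaddexp for stability, plus L2.
    pos = np.sum(user_e * pos_e, axis=1)
    neg = np.sum(user_e * neg_e, axis=1)
    loss = np.mean(np.logaddexp(0.0, -(pos - neg)))
    loss += reg * (np.sum(user_e**2) + np.sum(pos_e**2) + np.sum(neg_e**2))
    return loss

rng = np.random.default_rng(0)
print(bpr_loss(rng.normal(size=(4, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))))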
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,liuInterestawareMessagePassingGCN2021,\cite{liuInterestawareMessagePassingGCN2021},Interest-aware Message-Passing GCN for Recommendation,http://arxiv.org/abs/2102.10044v2,"Graph Convolution Networks (GCNs) manifest great potential in recommendation. This is attributed to their capability on learning good user and item embeddings by exploiting the collaborative signals from the high-order neighbors. Like other GCN models, the GCN based recommendation models also suffer from the notorious over-smoothing problem - when stacking more layers, node embeddings become more similar and eventually indistinguishable, resulted in performance degradation. The recently proposed LightGCN and LR-GCN alleviate this problem to some extent, however, we argue that they overlook an important factor for the over-smoothing problem in recommendation, that is, high-order neighboring users with no common interests of a user can be also involved in the user's embedding learning in the graph convolution operation. As a result, the multi-layer graph convolution will make users with dissimilar interests have similar embeddings. In this paper, we propose a novel Interest-aware Message-Passing GCN (IMP-GCN) recommendation model, which performs high-order graph convolution inside subgraphs. The subgraph consists of users with similar interests and their interacted items. To form the subgraphs, we design an unsupervised subgraph generation module, which can effectively identify users with common interests by exploiting both user feature and graph structure. To this end, our model can avoid propagating negative information from high-order neighbors into embedding learning. Experimental results on three large-scale benchmark datasets show that our model can gain performance improvement by stacking more layers and outperform the state-of-the-art GCN-based recommendation models significantly.",True,True,"Liu, Fan and Cheng, Zhiyong and Zhu, Lei and Gao, Zan and Nie, Liqiang",2021.0,,,,,Interest-aware Message-Passing GCN for Recommendation,Interest-aware Message-Passing GCN for Recommendation,https://dl.acm.org/doi/10.1145/3442381.3449986,"In this paper, we propose a novel Interest-aware Message-Passing GCN (IMP-GCN) recommendation model, which performs high-order graph convolution inside" Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,kongLinearNonLinearThat2022,\cite{kongLinearNonLinearThat2022},"Linear, or Non-Linear, That is the Question!",http://arxiv.org/abs/2111.07265v2,"There were fierce debates on whether the non-linear embedding propagation of GCNs is appropriate to GCN-based recommender systems. It was recently found that the linear embedding propagation shows better accuracy than the non-linear embedding propagation. Since this phenomenon was discovered especially in recommender systems, it is required that we carefully analyze the linearity and non-linearity issue. In this work, therefore, we revisit the issues of i) which of the linear or non-linear propagation is better and ii) which factors of users/items decide the linearity/non-linearity of the embedding propagation. We propose a novel Hybrid Method of Linear and non-linEar collaborative filTering method (HMLET, pronounced as Hamlet). 
In our design, there exist both linear and non-linear propagation steps, when processing each user or item node, and our gating module chooses one of them, which results in a hybrid model of the linear and non-linear GCN-based collaborative filtering (CF). The proposed model yields the best accuracy in three public benchmark datasets. Moreover, we classify users/items into the following three classes depending on our gating modules' selections: Full-Non-Linearity (FNL), Partial-Non-Linearity (PNL), and Full-Linearity (FL). We found that there exist strong correlations between nodes' centrality and their class membership, i.e., important user/item nodes exhibit more preferences towards the non-linearity during the propagation steps. To our knowledge, we are the first who design a hybrid method and report the correlation between the graph centrality and the linearity/non-linearity of nodes. All HMLET codes and datasets are available at: https://github.com/qbxlvnf11/HMLET.",True,True,"Kong, Taeyong and Kim, Taeri and Jeon, Jinsung and Choi, Jeongwhan and Lee, Yeon-Chang and Park, Noseong and Kim, Sang-Wook",2022.0,,,,,"Linear, or Non-Linear, That is the Question!","[2111.07265] Linear, or Non-Linear, That is the Question! - arXiv",https://arxiv.org/abs/2111.07265,It was recently found that the linear embedding propagation shows better accuracy than the non-linear embedding propagation. Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,fanGraphTrendFiltering2022,\cite{fanGraphTrendFiltering2022},Graph Trend Filtering Networks for Recommendations,http://arxiv.org/abs/2108.05552v2,"Recommender systems aim to provide personalized services to users and are playing an increasingly important role in our daily lives. The key of recommender systems is to predict how likely users will interact with items based on their historical online behaviors, e.g., clicks, add-to-cart, purchases, etc. To exploit these user-item interactions, there are increasing efforts on considering the user-item interactions as a user-item bipartite graph and then performing information propagation in the graph via Graph Neural Networks (GNNs). Given the power of GNNs in graph representation learning, these GNNs-based recommendation methods have remarkably boosted the recommendation performance. Despite their success, most existing GNNs-based recommender systems overlook the existence of interactions caused by unreliable behaviors (e.g., random/bait clicks) and uniformly treat all the interactions, which can lead to sub-optimal and unstable performance. In this paper, we investigate the drawbacks (e.g., non-adaptive propagation and non-robustness) of existing GNN-based recommendation methods. To address these drawbacks, we introduce a principled graph trend collaborative filtering method and propose the Graph Trend Filtering Networks for recommendations (GTN) that can capture the adaptive reliability of the interactions. Comprehensive experiments and ablation studies are presented to verify and understand the effectiveness of the proposed framework. 
Our implementation based on PyTorch is available at https://github.com/wenqifan03/GTN-SIGIR2022.",True,True,"Fan, Wenqi and Liu, Xiaorui and Jin, Wei and Zhao, Xiangyu and Tang, Jiliang and Li, Qing",2022.0,,,,,Graph Trend Filtering Networks for Recommendations,Graph Trend Filtering Networks for Recommendations,http://arxiv.org/pdf/2108.05552v2,"Recommender systems aim to provide personalized services to users and are playing an increasingly important role in our daily lives. The key of recommender systems is to predict how likely users will interact with items based on their historical online behaviors, e.g., clicks, add-to-cart, purchases, etc. To exploit these user-item interactions, there are increasing efforts on considering the user-item interactions as a user-item bipartite graph and then performing information propagation in the graph via Graph Neural Networks (GNNs). Given the power of GNNs in graph representation learning, these GNNs-based recommendation methods have remarkably boosted the recommendation performance. Despite their success, most existing GNNs-based recommender systems overlook the existence of interactions caused by unreliable behaviors (e.g., random/bait clicks) and uniformly treat all the interactions, which can lead to sub-optimal and unstable performance. In this paper, we investigate the drawbacks (e.g., non-adaptive propagation and non-robustness) of existing GNN-based recommendation methods. To address these drawbacks, we introduce a principled graph trend collaborative filtering method and propose the Graph Trend Filtering Networks for recommendations (GTN) that can capture the adaptive reliability of the interactions. Comprehensive experiments and ablation studies are presented to verify and understand the effectiveness of the proposed framework. Our implementation based on PyTorch is available at https://github.com/wenqifan03/GTN-SIGIR2022." Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,guoJGCF2023,\cite{guoJGCF2023},"On Manipulating Signals of User-Item Graph: A Jacobi Polynomial-based Graph Collaborative Filtering",http://arxiv.org/abs/2306.03624v1,"Collaborative filtering (CF) is an important research direction in recommender systems that aims to make recommendations given the information on user-item interactions. Graph CF has attracted more and more attention in recent years due to its effectiveness in leveraging high-order information in the user-item bipartite graph for better recommendations. Specifically, recent studies show the success of graph neural networks (GNN) for CF is attributed to its low-pass filtering effects. However, current researches lack a study of how different signal components contributes to recommendations, and how to design strategies to properly use them well. To this end, from the view of spectral transformation, we analyze the important factors that a graph filter should consider to achieve better performance. Based on the discoveries, we design JGCF, an efficient and effective method for CF based on Jacobi polynomial bases and frequency decomposition strategies. Extensive experiments on four widely used public datasets show the effectiveness and efficiency of the proposed methods, which brings at most 27.06% performance gain on Alibaba-iFashion. 
Besides, the experimental results also show that JGCF is better at handling sparse datasets, which shows potential in making recommendations for cold-start users.",True,True,"Guo, Jiayan and Du, Lun and Chen, Xu and Ma, Xiaojun and Fu, Qiang and Han, Shi and Zhang, Dongmei and Zhang, Yan",2023.0,,,,,"On Manipulating Signals of User-Item Graph: A Jacobi Polynomial-based Graph Collaborative Filtering",A Jacobi Polynomial-based Graph Collaborative Filtering,https://www.bohrium.com/paper-details/on-manipulating-signals-of-user-item-graph-a-jacobi-polynomial-based-graph-collaborative-filtering/873226422896820882-108611,On Manipulating Signals of User-Item Graph: A Jacobi Polynomial-based Graph Collaborative Filtering ... 2025-06-16. ACM Transactions on Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,wangCollaborationAwareGraphConvolutional2023,\cite{wangCollaborationAwareGraphConvolutional2023},Collaboration-Aware Graph Convolutional Network for Recommender Systems,http://arxiv.org/abs/2207.06221v4,"Graph Neural Networks (GNNs) have been successfully adopted in recommender systems by virtue of the message-passing that implicitly captures collaborative effect. Nevertheless, most of the existing message-passing mechanisms for recommendation are directly inherited from GNNs without scrutinizing whether the captured collaborative effect would benefit the prediction of user preferences. In this paper, we first analyze how message-passing captures the collaborative effect and propose a recommendation-oriented topological metric, Common Interacted Ratio (CIR), which measures the level of interaction between a specific neighbor of a node with the rest of its neighbors. After demonstrating the benefits of leveraging collaborations from neighbors with higher CIR, we propose a recommendation-tailored GNN, Collaboration-Aware Graph Convolutional Network (CAGCN), that goes beyond 1-Weisfeiler-Lehman(1-WL) test in distinguishing non-bipartite-subgraph-isomorphic graphs. Experiments on six benchmark datasets show that the best CAGCN variant outperforms the most representative GNN-based recommendation model, LightGCN, by nearly 10% in Recall@20 and also achieves around 80% speedup. Our code is publicly available at https://github.com/YuWVandy/CAGCN.",True,True,"Wang, Yu and Zhao, Yuying and Zhang, Yi and Derr, Tyler",2023.0,,,,,Collaboration-Aware Graph Convolutional Network for Recommender Systems,Collaboration-Aware Graph Convolutional Network for ...,https://dl.acm.org/doi/abs/10.1145/3543507.3583229,"by Y Wang · 2023 · Cited by 70 — We propose a recommendation-tailored GNN, Collaboration-Aware Graph Convolutional Network (CAGCN), that goes beyond 1-Weisfeiler-Lehman(1-WL) test." Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,zhuGiffCF2024,\cite{zhuGiffCF2024},Graph Signal Diffusion Model for Collaborative Filtering,http://arxiv.org/abs/2311.08744v3,"Collaborative filtering is a critical technique in recommender systems. It has been increasingly viewed as a conditional generative task for user feedback data, where newly developed diffusion model shows great potential. However, existing studies on diffusion model lack effective solutions for modeling implicit feedback. Particularly, the standard isotropic diffusion process overlooks correlation between items, misaligned with the graphical structure of the interaction space. 
Meanwhile, Gaussian noise destroys personalized information in a user's interaction vector, causing difficulty in its reconstruction. In this paper, we adapt standard diffusion model and propose a novel Graph Signal Diffusion Model for Collaborative Filtering (named GiffCF). To better represent the correlated distribution of user-item interactions, we define a generalized diffusion process using heat equation on the item-item similarity graph. Our forward process smooths interaction signals with an advanced family of graph filters, introducing the graph adjacency as beneficial prior knowledge for recommendation. Our reverse process iteratively refines and sharpens latent signals in a noise-free manner, where the updates are conditioned on the user's history and computed from a carefully designed two-stage denoiser, leading to high-quality reconstruction. Finally, through extensive experiments, we show that GiffCF effectively leverages the advantages of both diffusion model and graph signal processing, and achieves state-of-the-art performance on three benchmark datasets.",True,True,"Zhu, Yunqin and Wang, Chao and Zhang, Qi and Xiong, Hui",2024.0,,,,,Graph Signal Diffusion Model for Collaborative Filtering,Graph Signal Diffusion Model for Collaborative Filtering,http://arxiv.org/pdf/2311.08744v3,"Collaborative filtering is a critical technique in recommender systems. It has been increasingly viewed as a conditional generative task for user feedback data, where newly developed diffusion model shows great potential. However, existing studies on diffusion model lack effective solutions for modeling implicit feedback. Particularly, the standard isotropic diffusion process overlooks correlation between items, misaligned with the graphical structure of the interaction space. Meanwhile, Gaussian noise destroys personalized information in a user's interaction vector, causing difficulty in its reconstruction. In this paper, we adapt standard diffusion model and propose a novel Graph Signal Diffusion Model for Collaborative Filtering (named GiffCF). To better represent the correlated distribution of user-item interactions, we define a generalized diffusion process using heat equation on the item-item similarity graph. Our forward process smooths interaction signals with an advanced family of graph filters, introducing the graph adjacency as beneficial prior knowledge for recommendation. Our reverse process iteratively refines and sharpens latent signals in a noise-free manner, where the updates are conditioned on the user's history and computed from a carefully designed two-stage denoiser, leading to high-quality reconstruction. Finally, through extensive experiments, we show that GiffCF effectively leverages the advantages of both diffusion model and graph signal processing, and achieves state-of-the-art performance on three benchmark datasets." 
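The GiffCF row ending here defines its forward process as heat-equation smoothing of a user's interaction signal on an item-item similarity graph. A small sketch of that kind of smoothing follows, using explicit Euler steps of the heat equation; the cosine-style similarity construction and the step size are assumptions chosen for illustration, not the paper's exact filter family.

```python
import numpy as np

def heat_smooth(R, x, alpha=0.1, steps=10):
    """Smooth one user's interaction vector x (n_items,) on the
    item-item similarity graph built from the interaction matrix R.

    Runs Euler steps of the heat equation dx/dt = -L x, i.e.
    x <- (I - alpha * L) x, repeated `steps` times.
    """
    # Degree-normalized item-item similarity from co-interactions.
    deg = np.maximum(R.sum(axis=0), 1.0)
    S = (R / np.sqrt(deg)).T @ (R / np.sqrt(deg))  # (n_items, n_items)
    L = np.diag(S.sum(axis=1)) - S                 # combinatorial Laplacian
    for _ in range(steps):
        x = x - alpha * (L @ x)                    # one heat-diffusion step
    return x
```

Larger `steps` (or `alpha`) smooths more aggressively toward low-frequency components; the paper's reverse process then iteratively sharpens such smoothed signals, which this sketch does not cover.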
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,jinri2024content,\cite{jinri2024content},Content-based graph reconstruction for cold-start item recommendation,,,True,False,"Kim, Jinri and Kim, Eungi and Yeo, Kwangeun and Jeon, Yujin and Kim, Chanwoo and Lee, Sewon and Lee, Joonseok",2024.0,,,,,Content-based graph reconstruction for cold-start item recommendation,Content-based Graph Reconstruction for Cold-start Item ...,https://dl.acm.org/doi/10.1145/3626772.3657801,"Content-based Graph Reconstruction for Cold-start Item Recommendation | Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval" Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,eungi2025reducedgcn,\cite{eungi2025reducedgcn},Reduced{GCN}: Learning to Adapt Graph Convolution for Top-N Recommendation,,,True,False,"Kim, Eungi and Kim, Chanwoo and Yeo, Kwangeun and Kim, Jinri and Jeon, Yujin and Lee, Sewon and Lee, Joonseok",2025.0,,,,,Reduced{GCN}: Learning to Adapt Graph Convolution for Top-N Recommendation,ReducedGCN: Learning to Adapt Graph Convolution for Top ...,https://s-space.snu.ac.kr/handle/10371/210198?mode=full,"ReducedGCN can be applied to various GCN-based models, and experiments on three benchmark datasets confirm that it outperforms current state-of-the-art models" Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,wuSelfsupervisedGraphLearning2021,\cite{wuSelfsupervisedGraphLearning2021},Contrastive Self-supervised Learning for Graph Classification,http://arxiv.org/abs/2009.05923v1,"Graph classification is a widely studied problem and has broad applications. In many real-world problems, the number of labeled graphs available for training classification models is limited, which renders these models prone to overfitting. To address this problem, we propose two approaches based on contrastive self-supervised learning (CSSL) to alleviate overfitting. In the first approach, we use CSSL to pretrain graph encoders on widely-available unlabeled graphs without relying on human-provided labels, then finetune the pretrained encoders on labeled graphs. In the second approach, we develop a regularizer based on CSSL, and solve the supervised classification task and the unsupervised CSSL task simultaneously. To perform CSSL on graphs, given a collection of original graphs, we perform data augmentation to create augmented graphs out of the original graphs. An augmented graph is created by consecutively applying a sequence of graph alteration operations. A contrastive loss is defined to learn graph encoders by judging whether two augmented graphs are from the same original graph.
Experiments on various graph classification datasets demonstrate the effectiveness of our proposed methods.",True,True,"Wu, Jiancan and Wang, Xiang and Feng, Fuli and He, Xiangnan and Chen, Liang and Lian, Jianxun and Xie, Xing",2021.0,,,,,Contrastive Self-supervised Learning for Graph Classification,Contrastive Self-supervised Learning for Graph Classification,https://ojs.aaai.org/index.php/AAAI/article/view/17293/17100,"by J Zeng · 2021 · Cited by 187 — To alleviate overfitting in graph classification, we propose two methods based on contrastive self-supervised learning (CSSL): CSSL-Pretrain and CSSL-Reg. In" Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,xiaHypergraphContrastiveCollaborative2022,\cite{xiaHypergraphContrastiveCollaborative2022},Hypergraph Contrastive Collaborative Filtering,http://arxiv.org/abs/2204.12200v2,"Collaborative Filtering (CF) has emerged as fundamental paradigms for parameterizing users and items into latent representation space, with their correlative patterns from interaction data. Among various CF techniques, the development of GNN-based recommender systems, e.g., PinSage and LightGCN, has offered the state-of-the-art performance. However, two key challenges have not been well explored in existing solutions: i) The over-smoothing effect with deeper graph-based CF architecture, may cause the indistinguishable user representations and degradation of recommendation results. ii) The supervision signals (i.e., user-item interactions) are usually scarce and skewed distributed in reality, which limits the representation power of CF paradigms. To tackle these challenges, we propose a new self-supervised recommendation framework Hypergraph Contrastive Collaborative Filtering (HCCF) to jointly capture local and global collaborative relations with a hypergraph-enhanced cross-view contrastive learning architecture. In particular, the designed hypergraph structure learning enhances the discrimination ability of GNN-based CF paradigm, so as to comprehensively capture the complex high-order dependencies among users. Additionally, our HCCF model effectively integrates the hypergraph structure encoding with self-supervised learning to reinforce the representation quality of recommender systems, based on the hypergraph-enhanced self-discrimination. Extensive experiments on three benchmark datasets demonstrate the superiority of our model over various state-of-the-art recommendation methods, and the robustness against sparse user interaction data. Our model implementation codes are available at https://github.com/akaxlh/HCCF.",True,True,"Xia, Lianghao and Huang, Chao and Xu, Yong and Zhao, Jiashu and Yin, Dawei and Huang, Jimmy",2022.0,,,,,Hypergraph Contrastive Collaborative Filtering,Hypergraph Contrastive Collaborative Filtering,http://arxiv.org/pdf/2204.12200v2,"Collaborative Filtering (CF) has emerged as fundamental paradigms for parameterizing users and items into latent representation space, with their correlative patterns from interaction data. Among various CF techniques, the development of GNN-based recommender systems, e.g., PinSage and LightGCN, has offered the state-of-the-art performance. However, two key challenges have not been well explored in existing solutions: i) The over-smoothing effect with deeper graph-based CF architecture, may cause the indistinguishable user representations and degradation of recommendation results. 
ii) The supervision signals (i.e., user-item interactions) are usually scarce and skewed distributed in reality, which limits the representation power of CF paradigms. To tackle these challenges, we propose a new self-supervised recommendation framework Hypergraph Contrastive Collaborative Filtering (HCCF) to jointly capture local and global collaborative relations with a hypergraph-enhanced cross-view contrastive learning architecture. In particular, the designed hypergraph structure learning enhances the discrimination ability of GNN-based CF paradigm, so as to comprehensively capture the complex high-order dependencies among users. Additionally, our HCCF model effectively integrates the hypergraph structure encoding with self-supervised learning to reinforce the representation quality of recommender systems, based on the hypergraph-enhanced self-discrimination. Extensive experiments on three benchmark datasets demonstrate the superiority of our model over various state-of-the-art recommendation methods, and the robustness against sparse user interaction data. Our model implementation codes are available at https://github.com/akaxlh/HCCF." Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,linImprovingGraphCollaborative2022,\cite{linImprovingGraphCollaborative2022},"Improving Graph Collaborative Filtering with Neighborhood-enriched Contrastive Learning",http://arxiv.org/abs/2202.06200v2,"Recently, graph collaborative filtering methods have been proposed as an effective recommendation approach, which can capture users' preference over items by modeling the user-item interaction graphs. In order to reduce the influence of data sparsity, contrastive learning is adopted in graph collaborative filtering for enhancing the performance. However, these methods typically construct the contrastive pairs by random sampling, which neglect the neighboring relations among users (or items) and fail to fully exploit the potential of contrastive learning for recommendation. To tackle the above issue, we propose a novel contrastive learning approach, named Neighborhood-enriched Contrastive Learning, named NCL, which explicitly incorporates the potential neighbors into contrastive pairs. Specifically, we introduce the neighbors of a user (or an item) from graph structure and semantic space respectively. For the structural neighbors on the interaction graph, we develop a novel structure-contrastive objective that regards users (or items) and their structural neighbors as positive contrastive pairs. In implementation, the representations of users (or items) and neighbors correspond to the outputs of different GNN layers. Furthermore, to excavate the potential neighbor relation in semantic space, we assume that users with similar representations are within the semantic neighborhood, and incorporate these semantic neighbors into the prototype-contrastive objective. The proposed NCL can be optimized with EM algorithm and generalized to apply to graph collaborative filtering methods. Extensive experiments on five public datasets demonstrate the effectiveness of the proposed NCL, notably with 26% and 17% performance gain over a competitive graph collaborative filtering base model on the Yelp and Amazon-book datasets respectively. 
Our code is available at: https://github.com/RUCAIBox/NCL.",True,True,"Lin, Zihan and Tian, Changxin and Hou, Yupeng and Zhao, Wayne Xin",2022.0,,,,,"Improving Graph Collaborative Filtering with Neighborhood-enriched Contrastive Learning",Improving Graph Collaborative Filtering with Neighborhood ...,https://dl.acm.org/doi/10.1145/3485447.3512104,"We propose a novel contrastive learning approach, named Neighborhood-enriched Contrastive Learning, named NCL, which explicitly incorporates the potential" Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,jiangAdaptiveGraphContrastive2023,\cite{jiangAdaptiveGraphContrastive2023},Adaptive Graph Contrastive Learning for Recommendation,http://arxiv.org/abs/2305.10837v3,"Graph neural networks (GNNs) have recently emerged as an effective collaborative filtering (CF) approaches for recommender systems. The key idea of GNN-based recommender systems is to recursively perform message passing along user-item interaction edges to refine encoded embeddings, relying on sufficient and high-quality training data. However, user behavior data in practical recommendation scenarios is often noisy and exhibits skewed distribution. To address these issues, some recommendation approaches, such as SGL, leverage self-supervised learning to improve user representations. These approaches conduct self-supervised learning through creating contrastive views, but they depend on the tedious trial-and-error selection of augmentation methods. In this paper, we propose a novel Adaptive Graph Contrastive Learning (AdaGCL) framework that conducts data augmentation with two adaptive contrastive view generators to better empower the CF paradigm. Specifically, we use two trainable view generators - a graph generative model and a graph denoising model - to create adaptive contrastive views. With two adaptive contrastive views, AdaGCL introduces additional high-quality training signals into the CF paradigm, helping to alleviate data sparsity and noise issues. Extensive experiments on three real-world datasets demonstrate the superiority of our model over various state-of-the-art recommendation methods. Our model implementation codes are available at the link https://github.com/HKUDS/AdaGCL.",True,True,"Jiang, Yangqin and Huang, Chao and Huang, Lianghao",2023.0,,,,,Adaptive Graph Contrastive Learning for Recommendation,Adaptive Graph Contrastive Learning for Recommendation,http://arxiv.org/pdf/2305.10837v3,"Graph neural networks (GNNs) have recently emerged as an effective collaborative filtering (CF) approaches for recommender systems. The key idea of GNN-based recommender systems is to recursively perform message passing along user-item interaction edges to refine encoded embeddings, relying on sufficient and high-quality training data. However, user behavior data in practical recommendation scenarios is often noisy and exhibits skewed distribution. To address these issues, some recommendation approaches, such as SGL, leverage self-supervised learning to improve user representations. These approaches conduct self-supervised learning through creating contrastive views, but they depend on the tedious trial-and-error selection of augmentation methods. In this paper, we propose a novel Adaptive Graph Contrastive Learning (AdaGCL) framework that conducts data augmentation with two adaptive contrastive view generators to better empower the CF paradigm. 
Specifically, we use two trainable view generators - a graph generative model and a graph denoising model - to create adaptive contrastive views. With two adaptive contrastive views, AdaGCL introduces additional high-quality training signals into the CF paradigm, helping to alleviate data sparsity and noise issues. Extensive experiments on three real-world datasets demonstrate the superiority of our model over various state-of-the-art recommendation methods. Our model implementation codes are available at the link https://github.com/HKUDS/AdaGCL." Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,zhengSpectralCollaborativeFiltering2018,\cite{zhengSpectralCollaborativeFiltering2018},Spectral Collaborative Filtering,http://arxiv.org/abs/1808.10523v1,"Despite the popularity of Collaborative Filtering (CF), CF-based methods are haunted by the \textit{cold-start} problem, which has a significantly negative impact on users' experiences with Recommender Systems (RS). In this paper, to overcome the aforementioned drawback, we first formulate the relationships between users and items as a bipartite graph. Then, we propose a new spectral convolution operation directly performing in the \textit{spectral domain}, where not only the proximity information of a graph but also the connectivity information hidden in the graph are revealed. With the proposed spectral convolution operation, we build a deep recommendation model called Spectral Collaborative Filtering (SpectralCF). Benefiting from the rich information of connectivity existing in the \textit{spectral domain}, SpectralCF is capable of discovering deep connections between users and items and therefore, alleviates the \textit{cold-start} problem for CF. To the best of our knowledge, SpectralCF is the first CF-based method directly learning from the \textit{spectral domains} of user-item bipartite graphs. We apply our method on several standard datasets. It is shown that SpectralCF significantly outperforms state-of-the-art models. Code and data are available at \url{https://github.com/lzheng21/SpectralCF}.",True,True,"Zheng, Lei and Lu, Chun-Ta and Jiang, Fei and Zhang, Jiawei and Yu, Philip S",2018.0,,,,,Spectral Collaborative Filtering,Spectral Collaborative Filtering,http://arxiv.org/pdf/1808.10523v1,"Despite the popularity of Collaborative Filtering (CF), CF-based methods are haunted by the \textit{cold-start} problem, which has a significantly negative impact on users' experiences with Recommender Systems (RS). In this paper, to overcome the aforementioned drawback, we first formulate the relationships between users and items as a bipartite graph. Then, we propose a new spectral convolution operation directly performing in the \textit{spectral domain}, where not only the proximity information of a graph but also the connectivity information hidden in the graph are revealed. With the proposed spectral convolution operation, we build a deep recommendation model called Spectral Collaborative Filtering (SpectralCF). Benefiting from the rich information of connectivity existing in the \textit{spectral domain}, SpectralCF is capable of discovering deep connections between users and items and therefore, alleviates the \textit{cold-start} problem for CF. To the best of our knowledge, SpectralCF is the first CF-based method directly learning from the \textit{spectral domains} of user-item bipartite graphs. We apply our method on several standard datasets. 
It is shown that SpectralCF significantly outperforms state-of-the-art models. Code and data are available at \url{https://github.com/lzheng21/SpectralCF}." Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,shenHowPowerfulGraph2021,\cite{shenHowPowerfulGraph2021},How Powerful is Graph Convolution for Recommendation?,http://arxiv.org/abs/2108.07567v1,"Graph convolutional networks (GCNs) have recently enabled a popular class of algorithms for collaborative filtering (CF). Nevertheless, the theoretical underpinnings of their empirical successes remain elusive. In this paper, we endeavor to obtain a better understanding of GCN-based CF methods via the lens of graph signal processing. By identifying the critical role of smoothness, a key concept in graph signal processing, we develop a unified graph convolution-based framework for CF. We prove that many existing CF methods are special cases of this framework, including the neighborhood-based methods, low-rank matrix factorization, linear auto-encoders, and LightGCN, corresponding to different low-pass filters. Based on our framework, we then present a simple and computationally efficient CF baseline, which we shall refer to as Graph Filter based Collaborative Filtering (GF-CF). Given an implicit feedback matrix, GF-CF can be obtained in a closed form instead of expensive training with back-propagation. Experiments will show that GF-CF achieves competitive or better performance against deep learning-based methods on three well-known datasets, notably with a $70\%$ performance gain over LightGCN on the Amazon-book dataset.",True,True,"Shen, Yifei and Wu, Yongji and Zhang, Yao and Shan, Caihua and Zhang, Jun and Letaief, B. Khaled and Li, Dongsheng",2021.0,,,,,How Powerful is Graph Convolution for Recommendation?,How Powerful is Graph Convolution for Recommendation?,http://arxiv.org/pdf/2108.07567v1,"Graph convolutional networks (GCNs) have recently enabled a popular class of algorithms for collaborative filtering (CF). Nevertheless, the theoretical underpinnings of their empirical successes remain elusive. In this paper, we endeavor to obtain a better understanding of GCN-based CF methods via the lens of graph signal processing. By identifying the critical role of smoothness, a key concept in graph signal processing, we develop a unified graph convolution-based framework for CF. We prove that many existing CF methods are special cases of this framework, including the neighborhood-based methods, low-rank matrix factorization, linear auto-encoders, and LightGCN, corresponding to different low-pass filters. Based on our framework, we then present a simple and computationally efficient CF baseline, which we shall refer to as Graph Filter based Collaborative Filtering (GF-CF). Given an implicit feedback matrix, GF-CF can be obtained in a closed form instead of expensive training with back-propagation. Experiments will show that GF-CF achieves competitive or better performance against deep learning-based methods on three well-known datasets, notably with a $70\%$ performance gain over LightGCN on the Amazon-book dataset." Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,fuRevisitingNeighborhoodbasedLink2022,\cite{fuRevisitingNeighborhoodbasedLink2022},"Revisiting Neighborhood-based Link Prediction for Collaborative Filtering",http://arxiv.org/abs/2203.15789v1,"Collaborative filtering (CF) is one of the most successful and fundamental techniques in recommendation systems. 
In recent years, Graph Neural Network (GNN)-based CF models, such as NGCF [31], LightGCN [10] and GTN [9] have achieved tremendous success and significantly advanced the state-of-the-art. While there is a rich literature of such works using advanced models for learning user and item representations separately, item recommendation is essentially a link prediction problem between users and items. Furthermore, while there have been early works employing link prediction for collaborative filtering [5, 6], this trend has largely given way to works focused on aggregating information from user and item nodes, rather than modeling links directly. In this paper, we propose a new linkage (connectivity) score for bipartite graphs, generalizing multiple standard link prediction methods. We combine this new score with an iterative degree update process in the user-item interaction bipartite graph to exploit local graph structures without any node modeling. The result is a simple, non-deep learning model with only six learnable parameters. Despite its simplicity, we demonstrate our approach significantly outperforms existing state-of-the-art GNN-based CF approaches on four widely used benchmarks. In particular, on Amazon-Book, we demonstrate an over 60% improvement for both Recall and NDCG. We hope our work would invite the community to revisit the link prediction aspect of collaborative filtering, where significant performance gains could be achieved through aligning link prediction with item recommendations.",True,True,"Fu, Hao-Ming and Poirson, Patrick and Lee, Kwot Sin and Wang, Chen",2022.0,,,,,"Revisiting Neighborhood-based Link Prediction for Collaborative Filtering",Revisiting Neighborhood-based Link Prediction for ...,https://dl.acm.org/doi/10.1145/3487553.3524712,"We hope our work would invite the community to revisit the link prediction aspect of collaborative filtering, where significant performance gains could be" Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,liuPersonalizedGraphSignal2023,\cite{liuPersonalizedGraphSignal2023},Personalized Graph Signal Processing for Collaborative Filtering,http://arxiv.org/abs/2302.02113v1,"The collaborative filtering (CF) problem with only user-item interaction information can be solved by graph signal processing (GSP), which uses low-pass filters to smooth the observed interaction signals on the similarity graph to obtain the prediction signals. However, the interaction signal may not be sufficient to accurately characterize user interests and the low-pass filters may ignore the useful information contained in the high-frequency component of the observed signals, resulting in suboptimal accuracy. To this end, we propose a personalized graph signal processing (PGSP) method for collaborative filtering. Firstly, we design the personalized graph signal containing richer user information and construct an augmented similarity graph containing more graph topology information, to more effectively characterize user interests. Secondly, we devise a mixed-frequency graph filter to introduce useful information in the high-frequency components of the observed signals by combining an ideal low-pass filter that smooths signals globally and a linear low-pass filter that smooths signals locally. Finally, we combine the personalized graph signal, the augmented similarity graph and the mixed-frequency graph filter by proposing a pipeline consisting of three key steps: pre-processing, graph convolution and post-processing. 
Extensive experiments show that PGSP can achieve superior accuracy compared with state-of-the-art CF methods and, as a nonparametric method, PGSP has very high training efficiency.",True,True,"Liu, Jiahao and Li, Dongsheng and Gu, Hansu and Lu, Tun and Zhang, Peng and Shang, Li and Gu, Ning",2023.0,,,,,Personalized Graph Signal Processing for Collaborative Filtering,Personalized Graph Signal Processing for Collaborative Filtering,http://arxiv.org/pdf/2302.02113v1,"The collaborative filtering (CF) problem with only user-item interaction information can be solved by graph signal processing (GSP), which uses low-pass filters to smooth the observed interaction signals on the similarity graph to obtain the prediction signals. However, the interaction signal may not be sufficient to accurately characterize user interests and the low-pass filters may ignore the useful information contained in the high-frequency component of the observed signals, resulting in suboptimal accuracy. To this end, we propose a personalized graph signal processing (PGSP) method for collaborative filtering. Firstly, we design the personalized graph signal containing richer user information and construct an augmented similarity graph containing more graph topology information, to more effectively characterize user interests. Secondly, we devise a mixed-frequency graph filter to introduce useful information in the high-frequency components of the observed signals by combining an ideal low-pass filter that smooths signals globally and a linear low-pass filter that smooths signals locally. Finally, we combine the personalized graph signal, the augmented similarity graph and the mixed-frequency graph filter by proposing a pipeline consisting of three key steps: pre-processing, graph convolution and post-processing. Extensive experiments show that PGSP can achieve superior accuracy compared with state-of-the-art CF methods and, as a nonparametric method, PGSP has very high training efficiency." Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,pengSGFCF2024,\cite{pengSGFCF2024},How Powerful is Graph Filtering for Recommendation,http://arxiv.org/abs/2406.08827v1,"It has been shown that the effectiveness of graph convolutional network (GCN) for recommendation is attributed to the spectral graph filtering. Most GCN-based methods consist of a graph filter or followed by a low-rank mapping optimized based on supervised training. However, we show two limitations suppressing the power of graph filtering: (1) Lack of generality. Due to the varied noise distribution, graph filters fail to denoise sparse data where noise is scattered across all frequencies, while supervised training results in worse performance on dense data where noise is concentrated in middle frequencies that can be removed by graph filters without training. (2) Lack of expressive power. We theoretically show that linear GCN (LGCN) that is effective on collaborative filtering (CF) cannot generate arbitrary embeddings, implying the possibility that optimal data representation might be unreachable. To tackle the first limitation, we show close relation between noise distribution and the sharpness of spectrum where a sharper spectral distribution is more desirable causing data noise to be separable from important features without training. 
Based on this observation, we propose a generalized graph normalization G^2N to adjust the sharpness of spectral distribution in order to redistribute data noise to assure that it can be removed by graph filtering without training. As for the second limitation, we propose an individualized graph filter (IGF) adapting to the different confidence levels of the user preference that interactions can reflect, which is proved to be able to generate arbitrary embeddings. By simplifying LGCN, we further propose a simplified graph filtering (SGFCF) which only requires the top-K singular values for recommendation. Finally, experimental results on four datasets with different density settings demonstrate the effectiveness and efficiency of our proposed methods.",True,True,"Peng, Shaowen and Liu, Xin and Sugiyama, Kazunari and Mine, Tsunenori",2024.0,,,,,How Powerful is Graph Filtering for Recommendation,How Powerful is Graph Filtering for Recommendation,http://arxiv.org/pdf/2406.08827v1,"It has been shown that the effectiveness of graph convolutional network (GCN) for recommendation is attributed to the spectral graph filtering. Most GCN-based methods consist of a graph filter or followed by a low-rank mapping optimized based on supervised training. However, we show two limitations suppressing the power of graph filtering: (1) Lack of generality. Due to the varied noise distribution, graph filters fail to denoise sparse data where noise is scattered across all frequencies, while supervised training results in worse performance on dense data where noise is concentrated in middle frequencies that can be removed by graph filters without training. (2) Lack of expressive power. We theoretically show that linear GCN (LGCN) that is effective on collaborative filtering (CF) cannot generate arbitrary embeddings, implying the possibility that optimal data representation might be unreachable. To tackle the first limitation, we show close relation between noise distribution and the sharpness of spectrum where a sharper spectral distribution is more desirable causing data noise to be separable from important features without training. Based on this observation, we propose a generalized graph normalization G^2N to adjust the sharpness of spectral distribution in order to redistribute data noise to assure that it can be removed by graph filtering without training. As for the second limitation, we propose an individualized graph filter (IGF) adapting to the different confidence levels of the user preference that interactions can reflect, which is proved to be able to generate arbitrary embeddings. By simplifying LGCN, we further propose a simplified graph filtering (SGFCF) which only requires the top-K singular values for recommendation. Finally, experimental results on four datasets with different density settings demonstrate the effectiveness and efficiency of our proposed methods." 
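The GF-CF and SGFCF rows above both attribute linear graph CF to low-pass spectral filtering, with SGFCF keeping only the top-K singular values and reshaping the sharpness of the spectrum. A compact NumPy sketch of that family: symmetrically normalize the interaction matrix, truncate its SVD, and optionally sharpen the spectrum with a power `beta`. The `beta` knob is an illustrative stand-in for SGFCF's normalization idea, not its exact formula, and `k` is assumed to be at most min(n_users, n_items).

```python
import numpy as np

def spectral_lowpass_scores(R, k=64, beta=1.0):
    """Ideal low-pass graph filter for CF via truncated SVD.

    R: (n_users, n_items) binary implicit-feedback matrix.
    Returns a dense score matrix; higher = stronger recommendation.
    """
    # Symmetric degree normalization, as in LightGCN/GF-CF.
    du = np.maximum(R.sum(axis=1, keepdims=True), 1.0)
    di = np.maximum(R.sum(axis=0, keepdims=True), 1.0)
    R_norm = R / np.sqrt(du) / np.sqrt(di)
    # Top-k singular triplets = smoothest spectral components.
    U, s, Vt = np.linalg.svd(R_norm, full_matrices=False)
    Vk = Vt[:k].T                       # (n_items, k) item spectral basis
    weights = (s[:k] / s[0]) ** beta    # beta > 1 sharpens the spectrum
    # Project interactions onto the low-frequency item subspace.
    return R_norm @ (Vk * weights) @ Vk.T
```

With `beta = 0` this reduces to an ideal low-pass projector on the item side, close in spirit to GF-CF's closed-form, training-free baseline; no back-propagation is involved in either case.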
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,2505.00552v1,park2024turbo,\cite{park2024turbo},Turbo-{CF}: Matrix decomposition-free graph filtering for fast recommendation,,,True,False,"Park, Jin-Duk and Shin, Yong-Min and Shin, Won-Yong",2024.0,,,,,Turbo-{CF}: Matrix decomposition-free graph filtering for fast recommendation,Turbo-CF: Matrix Decomposition-Free Graph Filtering for ...,https://jordan7186.github.io/assets/pdf/turbocf.pdf,"A series of graph filtering (GF)-based collaborative filtering (CF) showcases state-of-the-art performance on the recommendation accuracy by using a low-pass filter (LPF) without a training process. Interestingly, Theorem 3.1 implies that we can design arbitrary LPFs by deciding proper coefficients of polynomials. Figure 2: The schematic overview of Turbo-CF." "Search-Based Interaction For Conversation Recommendation via Generative Reward Model Based Simulated User",2504.20458v1,lei2020estimation,\cite{lei2020estimation},Estimation-action-reflection: Towards deep interaction between conversational and recommender systems,,,True,False,"Lei, Wenqiang and He, Xiangnan and Miao, Yisong and Wu, Qingyun and Hong, Richang and Kan, Min-Yen and Chua, Tat-Seng",2020.0,,,,,Estimation-action-reflection: Towards deep interaction between conversational and recommender systems,Estimation–Action–Reflection: Towards deep interaction between ...,https://pure.psu.edu/en/publications/estimationactionreflection-towards-deep-interaction-between-conve,"Recommender systems are embracing conversational technologies to obtain user preferences dynamically, and to overcome inherent limitations of their static" "Search-Based Interaction For Conversation Recommendation via Generative Reward Model Based Simulated User",2504.20458v1,lei2020interactive,\cite{lei2020interactive},Interactive Path Reasoning on Graph for Conversational Recommendation,http://arxiv.org/abs/2007.00194v1,"Traditional recommendation systems estimate user preference on items from past interaction history, thus suffering from the limitations of obtaining fine-grained and dynamic user preference. Conversational recommendation system (CRS) brings revolutions to those limitations by enabling the system to directly ask users about their preferred attributes on items. However, existing CRS methods do not make full use of such advantage -- they only use the attribute feedback in rather implicit ways such as updating the latent user representation. In this paper, we propose Conversational Path Reasoning (CPR), a generic framework that models conversational recommendation as an interactive path reasoning problem on a graph. It walks through the attribute vertices by following user feedback, utilizing the user preferred attributes in an explicit way.
By leveraging on the graph structure, CPR is able to prune off many irrelevant candidate attributes, leading to better chance of hitting user preferred attributes. To demonstrate how CPR works, we propose a simple yet effective instantiation named SCPR (Simple CPR). We perform empirical studies on the multi-round conversational recommendation scenario, the most realistic CRS setting so far that considers multiple rounds of asking attributes and recommending items. Through extensive experiments on two datasets Yelp and LastFM, we validate the effectiveness of our SCPR, which significantly outperforms the state-of-the-art CRS methods EAR (arXiv:2002.09102) and CRM (arXiv:1806.03277). In particular, we find that the more attributes there are, the more advantages our method can achieve.",True,True,"Lei, Wenqiang and Zhang, Gangyi and He, Xiangnan and Miao, Yisong and Wang, Xiang and Chen, Liang and Chua, Tat-Seng",2020.0,,,,,Interactive Path Reasoning on Graph for Conversational Recommendation,Interactive Path Reasoning on Graph for Conversational Recommendation,http://arxiv.org/pdf/2007.00194v1,"Traditional recommendation systems estimate user preference on items from past interaction history, thus suffering from the limitations of obtaining fine-grained and dynamic user preference. Conversational recommendation system (CRS) brings revolutions to those limitations by enabling the system to directly ask users about their preferred attributes on items. However, existing CRS methods do not make full use of such advantage -- they only use the attribute feedback in rather implicit ways such as updating the latent user representation. In this paper, we propose Conversational Path Reasoning (CPR), a generic framework that models conversational recommendation as an interactive path reasoning problem on a graph. It walks through the attribute vertices by following user feedback, utilizing the user preferred attributes in an explicit way. By leveraging on the graph structure, CPR is able to prune off many irrelevant candidate attributes, leading to better chance of hitting user preferred attributes. To demonstrate how CPR works, we propose a simple yet effective instantiation named SCPR (Simple CPR). We perform empirical studies on the multi-round conversational recommendation scenario, the most realistic CRS setting so far that considers multiple rounds of asking attributes and recommending items. Through extensive experiments on two datasets Yelp and LastFM, we validate the effectiveness of our SCPR, which significantly outperforms the state-of-the-art CRS methods EAR (arXiv:2002.09102) and CRM (arXiv:1806.03277). In particular, we find that the more attributes there are, the more advantages our method can achieve." "Search-Based Interaction For Conversation Recommendation via Generative Reward Model Based Simulated User",2504.20458v1,li2021seamlessly,\cite{li2021seamlessly},"Seamlessly Unifying Attributes and Items: Conversational Recommendation for Cold-Start Users",http://arxiv.org/abs/2005.12979v5,"Static recommendation methods like collaborative filtering suffer from the inherent limitation of performing real-time personalization for cold-start users. Online recommendation, e.g., multi-armed bandit approach, addresses this limitation by interactively exploring user preference online and pursuing the exploration-exploitation (EE) trade-off. However, existing bandit-based methods model recommendation actions homogeneously. 
Specifically, they only consider the items as the arms, being incapable of handling the item attributes, which naturally provide interpretable information of user's current demands and can effectively filter out undesired items. In this work, we consider the conversational recommendation for cold-start users, where a system can both ask the attributes from and recommend items to a user interactively. This important scenario was studied in a recent work. However, it employs a hand-crafted function to decide when to ask attributes or make recommendations. Such separate modeling of attributes and items makes the effectiveness of the system highly rely on the choice of the hand-crafted function, thus introducing fragility to the system. To address this limitation, we seamlessly unify attributes and items in the same arm space and achieve their EE trade-offs automatically using the framework of Thompson Sampling. Our Conversational Thompson Sampling (ConTS) model holistically solves all questions in conversational recommendation by choosing the arm with the maximal reward to play. Extensive experiments on three benchmark datasets show that ConTS outperforms the state-of-the-art methods Conversational UCB (ConUCB) and Estimation-Action-Reflection model in both metrics of success rate and average number of conversation turns.",True,True,"Li, Shijun and Lei, Wenqiang and Wu, Qingyun and He, Xiangnan and Jiang, Peng and Chua, Tat-Seng",2021.0,,,,ACM Transactions on Information Systems (TOIS),"Seamlessly Unifying Attributes and Items: Conversational Recommendation for Cold-Start Users",Conversational Recommendation for Cold-start Users,https://dl.acm.org/doi/10.1145/3446427,"In this work, we consider the conversational recommendation for cold-start users, where a system can both ask the attributes from and recommend items to a user" "Search-Based Interaction For Conversation Recommendation via Generative Reward Model Based Simulated User",2504.20458v1,wang2022towards,\cite{wang2022towards},Towards unified conversational recommender systems via knowledge-enhanced prompt learning,,,True,False,"Wang, Xiaolei and Zhou, Kun and Wen, Ji-Rong and Zhao, Wayne Xin",2022.0,,,,,Towards unified conversational recommender systems via knowledge-enhanced prompt learning,Improving conversational recommender systems via multi ... - Bohrium,https://www.bohrium.com/paper-details/improving-conversational-recommender-systems-via-multi-preference-modeling-and-knowledge-enhanced/952757365894021141-2446,[3] Towards Unified Conversational Recommender Systems via Knowledge-Enhanced Prompt Learning. Conversational recommender systems (CRS) aim to proactively "Search-Based Interaction For Conversation Recommendation via Generative Reward Model Based Simulated User",2504.20458v1,wang2023improving,\cite{wang2023improving},Improving conversational recommendation systems via counterfactual data simulation,,,True,False,"Wang, Xiaolei and Zhou, Kun and Tang, Xinyu and Zhao, Wayne Xin and Pan, Fan and Cao, Zhao and Wen, Ji-Rong",2023.0,,,,,Improving conversational recommendation systems via counterfactual data simulation,Improving Conversational Recommendation Systems via Counterfactual Data Simulation,http://arxiv.org/pdf/2306.02842v1,"Conversational recommender systems (CRSs) aim to provide recommendation services via natural language conversations. Although a number of approaches have been proposed for developing capable CRSs, they typically rely on sufficient training data for training. 
Since it is difficult to annotate recommendation-oriented dialogue datasets, existing CRS approaches often suffer from the issue of insufficient training due to the scarcity of training data. To address this issue, in this paper, we propose a CounterFactual data simulation approach for CRS, named CFCRS, to alleviate the issue of data scarcity in CRSs. Our approach is developed based on the framework of counterfactual data augmentation, which gradually incorporates the rewriting to the user preference from a real dialogue without interfering with the entire conversation flow. To develop our approach, we characterize user preference and organize the conversation flow by the entities involved in the dialogue, and design a multi-stage recommendation dialogue simulator based on a conversation flow language model. Under the guidance of the learned user preference and dialogue schema, the flow language model can produce reasonable, coherent conversation flows, which can be further realized into complete dialogues. Based on the simulator, we perform the intervention at the representations of the interacted entities of target users, and design an adversarial training method with a curriculum schedule that can gradually optimize the data augmentation strategy. Extensive experiments show that our approach can consistently boost the performance of several competitive CRSs, and outperform other data augmentation methods, especially when the training data is limited. Our code is publicly available at https://github.com/RUCAIBox/CFCRS." "Search-Based Interaction For Conversation Recommendation via Generative Reward Model Based Simulated User",2504.20458v1,zhao2023alleviating,\cite{zhao2023alleviating},Alleviating the Long-Tail Problem in Conversational Recommender Systems,http://arxiv.org/abs/2307.11650v1,"Conversational recommender systems (CRS) aim to provide the recommendation service via natural language conversations. To develop an effective CRS, high-quality CRS datasets are very crucial. However, existing CRS datasets suffer from the long-tail issue, \ie a large proportion of items are rarely (or even never) mentioned in the conversations, which are called long-tail items. As a result, the CRSs trained on these datasets tend to recommend frequent items, and the diversity of the recommended items would be largely reduced, making users easier to get bored. To address this issue, this paper presents \textbf{LOT-CRS}, a novel framework that focuses on simulating and utilizing a balanced CRS dataset (\ie covering all the items evenly) for improving \textbf{LO}ng-\textbf{T}ail recommendation performance of CRSs. In our approach, we design two pre-training tasks to enhance the understanding of simulated conversation for long-tail items, and adopt retrieval-augmented fine-tuning with label smoothness strategy to further improve the recommendation of long-tail items. 
Extensive experiments on two public CRS datasets have demonstrated the effectiveness and extensibility of our approach, especially on long-tail recommendation.",True,True,"Zhao, Zhipeng and Zhou, Kun and Wang, Xiaolei and Zhao, Wayne Xin and Pan, Fan and Cao, Zhao and Wen, Ji-Rong",2023.0,,,,,Alleviating the Long-Tail Problem in Conversational Recommender Systems,Alleviating the Long-Tail Problem in Conversational Recommender ...,https://dl.acm.org/doi/fullHtml/10.1145/3604915.3608812,"To reduce the influence of the long-tail problem, a commonly used way is the resampling method that adds redundant examples about the long-tail items to balance" "Search-Based Interaction For Conversation Recommendation via Generative Reward Model Based Simulated User",2504.20458v1,dao2024broadening,\cite{dao2024broadening},Broadening the view: Demonstration-augmented prompt learning for conversational recommendation,,,True,False,"Dao, Huy and Deng, Yang and Le, Dung D and Liao, Lizi",2024.0,,,,,Broadening the view: Demonstration-augmented prompt learning for conversational recommendation,Broadening the View: Demonstration-augmented Prompt Learning ...,https://dl.acm.org/doi/10.1145/3626772.3657755,"We introduce a novel Demonstration-enhanced Conversational Recommender System (DCRS), which aims to strengthen its understanding on the given dialogue contexts." "Search-Based Interaction For Conversation Recommendation via Generative Reward Model Based Simulated User",2504.20458v1,he2023large,\cite{he2023large},Large language models as zero-shot conversational recommenders,,,True,False,"He, Zhankui and Xie, Zhouhang and Jha, Rahul and Steck, Harald and Liang, Dawen and Feng, Yesu and Majumder, Bodhisattwa Prasad and Kallus, Nathan and McAuley, Julian",2023.0,,,,,Large language models as zero-shot conversational recommenders,Large Language Models as Zero-Shot Conversational ...,https://arxiv.org/abs/2308.10053,"by Z He · 2023 · Cited by 233 — In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting." "Search-Based Interaction For Conversation Recommendation via Generative Reward Model Based Simulated User",2504.20458v1,yang2024unleashing,\cite{yang2024unleashing},Unleashing the Retrieval Potential of Large Language Models in Conversational Recommender Systems,,,True,False,"Yang, Ting and Chen, Li",2024.0,,,,,Unleashing the Retrieval Potential of Large Language Models in Conversational Recommender Systems,Unleashing the Retrieval Potential of Large Language Models in ...,https://www.researchgate.net/publication/384748558_Unleashing_the_Retrieval_Potential_of_Large_Language_Models_in_Conversational_Recommender_Systems,"Conversational Recommender Systems. 
CRS enables users to receive recommendations through interactive conversations, often with mixed initiative where both user" "Search-Based Interaction For Conversation Recommendation via Generative Reward Model Based Simulated User",2504.20458v1,xie2024neighborhood,\cite{xie2024neighborhood},Neighborhood-Based Collaborative Filtering for Conversational Recommendation,,,True,False,"Xie, Zhouhang and Wu, Junda and Jeon, Hyunsik and He, Zhankui and Steck, Harald and Jha, Rahul and Liang, Dawen and Kallus, Nathan and McAuley, Julian",2024.0,,,,,Neighborhood-Based Collaborative Filtering for Conversational Recommendation,Neighborhood-Based Collaborative Filtering for Conversational ...,https://dl.acm.org/doi/10.1145/3640457.3688191,We define a class of neighborhood-based CRS that makes recommendations by identifying items commonly associated with similar training dialogue contexts. "Search-Based Interaction For Conversation Recommendation via Generative Reward Model Based Simulated User",2504.20458v1,cobbe2021training,\cite{cobbe2021training},Training Verifiers to Solve Math Word Problems,http://arxiv.org/abs/2110.14168v2,"State-of-the-art language models can match human performance on many tasks, but they still struggle to robustly perform multi-step mathematical reasoning. To diagnose the failures of current models and support research, we introduce GSM8K, a dataset of 8.5K high quality linguistically diverse grade school math word problems. We find that even the largest transformer models fail to achieve high test performance, despite the conceptual simplicity of this problem distribution. To increase performance, we propose training verifiers to judge the correctness of model completions. At test time, we generate many candidate solutions and select the one ranked highest by the verifier. We demonstrate that verification significantly improves performance on GSM8K, and we provide strong empirical evidence that verification scales more effectively with increased data than a finetuning baseline.",True,True,"Cobbe, Karl and Kosaraju, Vineet and Bavarian, Mohammad and Chen, Mark and Jun, Heewoo and Kaiser, Lukasz and Plappert, Matthias and Tworek, Jerry and Hilton, Jacob and Nakano, Reiichiro and others",2021.0,,,,arXiv preprint arXiv:2110.14168,Training Verifiers to Solve Math Word Problems,Training Verifiers to Solve Math Word Problems,http://arxiv.org/pdf/2110.14168v2,"State-of-the-art language models can match human performance on many tasks, but they still struggle to robustly perform multi-step mathematical reasoning. To diagnose the failures of current models and support research, we introduce GSM8K, a dataset of 8.5K high quality linguistically diverse grade school math word problems. We find that even the largest transformer models fail to achieve high test performance, despite the conceptual simplicity of this problem distribution. To increase performance, we propose training verifiers to judge the correctness of model completions. At test time, we generate many candidate solutions and select the one ranked highest by the verifier. We demonstrate that verification significantly improves performance on GSM8K, and we provide strong empirical evidence that verification scales more effectively with increased data than a finetuning baseline." 
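A minimal sketch of the verifier-guided Best-of-N selection described in the GSM8K verifier record above; `generate_candidates` and `verifier_score` are hypothetical stand-ins for a sampled LLM and a trained correctness verifier.

```python
# Best-of-N with a trained verifier (in the spirit of Cobbe et al., 2021):
# sample N candidate solutions, score each with the verifier, keep the best.
from typing import Callable, List

def best_of_n(problem: str,
              generate_candidates: Callable[[str, int], List[str]],
              verifier_score: Callable[[str, str], float],
              n: int = 16) -> str:
    """Return the candidate the verifier ranks highest."""
    candidates = generate_candidates(problem, n)
    # The verifier maps (problem, solution) to an estimated correctness score.
    scores = [verifier_score(problem, c) for c in candidates]
    best_idx = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best_idx]
```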
"Search-Based Interaction For Conversation Recommendation via Generative Reward Model Based Simulated User",2504.20458v1,lightmanlet,\cite{lightmanlet},Let's Verify Step by Step,http://arxiv.org/abs/2305.20050v1,"In recent years, large language models have greatly improved in their ability to perform complex multi-step reasoning. However, even state-of-the-art models still regularly produce logical mistakes. To train more reliable models, we can turn either to outcome supervision, which provides feedback for a final result, or process supervision, which provides feedback for each intermediate reasoning step. Given the importance of training reliable models, and given the high cost of human feedback, it is important to carefully compare both methods. Recent work has already begun this comparison, but many questions still remain. We conduct our own investigation, finding that process supervision significantly outperforms outcome supervision for training models to solve problems from the challenging MATH dataset. Our process-supervised model solves 78% of problems from a representative subset of the MATH test set. Additionally, we show that active learning significantly improves the efficacy of process supervision. To support related research, we also release PRM800K, the complete dataset of 800,000 step-level human feedback labels used to train our best reward model.",True,True,"Lightman, Hunter and Kosaraju, Vineet and Burda, Yuri and Edwards, Harrison and Baker, Bowen and Lee, Teddy and Leike, Jan and Schulman, John and Sutskever, Ilya and Cobbe, Karl",,,,,,Let's Verify Step by Step,Let's Verify Step by Step,http://arxiv.org/pdf/2305.20050v1,"In recent years, large language models have greatly improved in their ability to perform complex multi-step reasoning. However, even state-of-the-art models still regularly produce logical mistakes. To train more reliable models, we can turn either to outcome supervision, which provides feedback for a final result, or process supervision, which provides feedback for each intermediate reasoning step. Given the importance of training reliable models, and given the high cost of human feedback, it is important to carefully compare both methods. Recent work has already begun this comparison, but many questions still remain. We conduct our own investigation, finding that process supervision significantly outperforms outcome supervision for training models to solve problems from the challenging MATH dataset. Our process-supervised model solves 78% of problems from a representative subset of the MATH test set. Additionally, we show that active learning significantly improves the efficacy of process supervision. To support related research, we also release PRM800K, the complete dataset of 800,000 step-level human feedback labels used to train our best reward model." "Search-Based Interaction For Conversation Recommendation via Generative Reward Model Based Simulated User",2504.20458v1,mahan2024generative,\cite{mahan2024generative},Generative Reward Models,http://arxiv.org/abs/2410.12832v1,"Reinforcement Learning from Human Feedback (RLHF) has greatly improved the performance of modern Large Language Models (LLMs). The RLHF process is resource-intensive and technically challenging, generally requiring a large collection of human preference labels over model-generated outputs. Reinforcement Learning from AI Feedback (RLAIF) addresses this data collection challenge by leveraging synthetic preferences generated by an LLM.
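For the process-supervision record above, a hedged sketch of how a process reward model (PRM) can score a full solution: per-step correctness probabilities are aggregated, here multiplied in log space. `step_prob` is a hypothetical stand-in for the trained PRM.

```python
# Solution-level score from a process reward model: the product of per-step
# correctness probabilities (a common aggregation for PRM-style scoring).
import math
from typing import Callable, List

def prm_solution_score(problem: str,
                       steps: List[str],
                       step_prob: Callable[[str, List[str], str], float]) -> float:
    """Aggregate P(step is correct | problem, prefix) across reasoning steps."""
    log_score = 0.0
    prefix: List[str] = []
    for step in steps:
        p = step_prob(problem, prefix, step)
        log_score += math.log(max(p, 1e-12))  # clamp to avoid log(0)
        prefix.append(step)
    return math.exp(log_score)
```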
However, recent work has shown that synthetic preference labels may not align well with human preference judgments. To address this, we propose a hybrid approach that unifies RLHF and RLAIF methodologies. We introduce GenRM, an iterative algorithm that trains an LLM on self-generated reasoning traces, leading to synthetic preference labels matching human preference judgments. Empirically, we show that zero-shot LLM-based judgments underperform compared to Bradley-Terry reward models on in-distribution tasks (between 9-36%). In contrast, GenRM achieves in-distribution accuracy comparable to Bradley-Terry models, while significantly outperforming them on out-of-distribution tasks (between 10-45%). Moreover, GenRM surpasses the performance of using LLMs as judges on both in-distribution (by 9-31%) and out-of-distribution tasks (by 2-6%). Our results show that combining the strengths of RLHF and RLAIF offers a promising approach for improving the quality of synthetic preference labels.",True,True,"Mahan, Dakota and Van Phung, Duy and Rafailov, Rafael and Blagden, Chase and Lile, Nathan and Castricato, Louis and Fr{\""a}nken, Jan-Philipp and Finn, Chelsea and Albalak, Alon",2024.0,,,,arXiv preprint arXiv:2410.12832,Generative Reward Models,Generative Reward Models,http://arxiv.org/pdf/2410.12832v1,"Reinforcement Learning from Human Feedback (RLHF) has greatly improved the performance of modern Large Language Models (LLMs). The RLHF process is resource-intensive and technically challenging, generally requiring a large collection of human preference labels over model-generated outputs. Reinforcement Learning from AI Feedback (RLAIF) addresses this data collection challenge by leveraging synthetic preferences generated by an LLM. However, recent work has shown that synthetic preference labels may not align well with human preference judgments. To address this, we propose a hybrid approach that unifies RLHF and RLAIF methodologies. We introduce GenRM, an iterative algorithm that trains an LLM on self-generated reasoning traces, leading to synthetic preference labels matching human preference judgments. Empirically, we show that zero-shot LLM-based judgments underperform compared to Bradley-Terry reward models on in-distribution tasks (between 9-36%). In contrast, GenRM achieves in-distribution accuracy comparable to Bradley-Terry models, while significantly outperforming them on out-of-distribution tasks (between 10-45%). Moreover, GenRM surpasses the performance of using LLMs as judges on both in-distribution (by 9-31%) and out-of-distribution tasks (by 2-6%). Our results show that combining the strengths of RLHF and RLAIF offers a promising approach for improving the quality of synthetic preference labels." "Search-Based Interaction For Conversation Recommendation via Generative Reward Model Based Simulated User",2504.20458v1,zhang24generative,\cite{zhang24generative},Generative Verifiers: Reward Modeling as Next-Token Prediction,http://arxiv.org/abs/2408.15240v3,"Verifiers or reward models are often used to enhance the reasoning performance of large language models (LLMs). A common approach is the Best-of-N method, where N candidate solutions generated by the LLM are ranked by a verifier, and the best one is selected. While LLM-based verifiers are typically trained as discriminative classifiers to score solutions, they do not utilize the text generation capabilities of pretrained LLMs.
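The GenRM record above benchmarks against Bradley-Terry reward models; below is a minimal sketch of that pairwise preference loss, assuming a reward model that already maps responses to scalar rewards.

```python
# Bradley-Terry pairwise preference loss: the reward model should score the
# chosen response above the rejected one; minimize -log sigmoid(r_c - r_r).
import torch
import torch.nn.functional as F

def bradley_terry_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """r_chosen, r_rejected: scalar rewards per preference pair, shape (batch,)."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Usage with dummy rewards:
loss = bradley_terry_loss(torch.randn(8), torch.randn(8))
```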
To overcome this limitation, we instead propose training verifiers using the ubiquitous next-token prediction objective, jointly on verification and solution generation. Compared to standard verifiers, such generative verifiers (GenRM) can benefit from several advantages of LLMs: they integrate seamlessly with instruction tuning, enable chain-of-thought reasoning, and can utilize additional test-time compute via majority voting for better verification. We demonstrate that GenRM outperforms discriminative, DPO verifiers, and LLM-as-a-Judge, resulting in large performance gains with Best-of-N, namely 5% $\rightarrow$ 45.3% on algorithmic tasks and 73% $\rightarrow$ 93.4% on GSM8K. In easy-to-hard generalization settings, we observe improvements of 28% $\rightarrow$ 44.6% on MATH, and 37.9% $\rightarrow$ 53.5% on MMLU abstract algebra. Furthermore, we find that training GenRM with synthetic verification rationales is sufficient to pick out subtle errors on math problems. Finally, we demonstrate that GenRM scales favorably with model size and test-time compute.",True,True,"Zhang, Lunjun and Hosseini, Arian and Bansal, Hritik and Kazemi, Mehran and Kumar, Aviral and Agarwal, Rishabh",,,,,,Generative Verifiers: Reward Modeling as Next-Token Prediction,[PDF] Generative Verifiers: Reward Modeling as Next-Token Prediction,https://arxiv.org/pdf/2408.15240,"Verifiers or reward models are often used to enhance the reasoning performance of large language models (LLMs). A common approach is the Best-of-N method," "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,zhao2018deep,\cite{zhao2018deep},Deep Reinforcement Learning for Page-wise Recommendations,http://arxiv.org/abs/1805.02343v2,"Recommender systems can mitigate the information overload problem by suggesting users' personalized items. In real-world recommendations such as e-commerce, a typical interaction between the system and its users is -- users are recommended a page of items and provide feedback; and then the system recommends a new page of items. To effectively capture such interaction for recommendations, we need to solve two key problems -- (1) how to update the recommending strategy according to the user's real-time feedback, and (2) how to generate a page of items with proper display, which pose tremendous challenges to traditional recommender systems. In this paper, we study the problem of page-wise recommendations aiming to address the aforementioned two challenges simultaneously. In particular, we propose a principled approach to jointly generate a set of complementary items and the corresponding strategy to display them in a 2-D page; and propose a novel page-wise recommendation framework based on deep reinforcement learning, DeepPage, which can optimize a page of items with proper display based on real-time feedback from users.
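For the generative-verifier record above, a hedged sketch of verification cast as next-token prediction: the score is the probability of a "Yes" token after a verification prompt, averaged over sampled rationales (majority voting). The prompt template and `yes_probability` are assumptions.

```python
# Generative verification score: average P("Yes") over sampled CoT rationales.
from statistics import mean
from typing import Callable

VERIFY_TEMPLATE = "Question: {q}\nProposed solution: {s}\nIs the solution correct? Answer:"

def genrm_score(q: str, s: str,
                yes_probability: Callable[[str], float],
                num_votes: int = 8) -> float:
    """Each call to yes_probability is assumed to sample a fresh rationale
    before reading out the probability of the 'Yes' token."""
    prompt = VERIFY_TEMPLATE.format(q=q, s=s)
    return mean(yes_probability(prompt) for _ in range(num_votes))
```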
The experimental results based on a real-world e-commerce dataset demonstrate the effectiveness of the proposed framework.",True,True,"Zhao, Xiangyu and Xia, Long and Zhang, Liang and Ding, Zhuoye and Yin, Dawei and Tang, Jiliang",2018.0,,,,,Deep Reinforcement Learning for Page-wise Recommendations,Deep Reinforcement Learning for Page-wise Recommendations,https://arxiv.org/abs/1805.02343,"A novel page-wise recommendation framework based on deep reinforcement learning, DeepPage, which can optimize a page of items with proper display based on real" "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,zhao2018recommendations,\cite{zhao2018recommendations},"Recommendations with Negative Feedback via Pairwise Deep Reinforcement Learning",http://arxiv.org/abs/1802.06501v3,"Recommender systems play a crucial role in mitigating the problem of information overload by suggesting users' personalized items or services. The vast majority of traditional recommender systems consider the recommendation procedure as a static process and make recommendations following a fixed strategy. In this paper, we propose a novel recommender system with the capability of continuously improving its strategies during the interactions with users. We model the sequential interactions between users and a recommender system as a Markov Decision Process (MDP) and leverage Reinforcement Learning (RL) to automatically learn the optimal strategies via recommending trial-and-error items and receiving reinforcements of these items from users' feedback. Users' feedback can be positive or negative, and both types of feedback have great potential to boost recommendations. However, the amount of negative feedback is much larger than that of positive feedback; thus, incorporating them simultaneously is challenging, since positive feedback could be buried by negative feedback. In this paper, we develop a novel approach to incorporate them into the proposed deep recommender system (DEERS) framework. The experimental results based on real-world e-commerce data demonstrate the effectiveness of the proposed framework. Further experiments have been conducted to understand the importance of both positive and negative feedback in recommendations.",True,True,"Zhao, Xiangyu and Zhang, Liang and Ding, Zhuoye and Xia, Long and Tang, Jiliang and Yin, Dawei",2018.0,,,,,"Recommendations with Negative Feedback via Pairwise Deep Reinforcement Learning",[PDF] Recommendations with Negative Feedback via Pairwise Deep ...,https://zhaoxyai.github.io/paper/kdd2018.pdf,"We propose a deep reinforcement learning based framework, DEERS, which can automatically learn the optimal recommendation strategies by incorporating positive" "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,liu2023multi,\cite{liu2023multi},Multi-Task Recommendations with Reinforcement Learning,http://arxiv.org/abs/2302.03328v2,"In recent years, Multi-task Learning (MTL) has yielded immense success in Recommender System (RS) applications. However, current MTL-based recommendation models tend to disregard the session-wise patterns of user-item interactions because they are predominantly constructed based on item-wise datasets. Moreover, balancing multiple objectives has always been a challenge in this field, which is typically avoided via linear estimations in existing works.
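The two RL records above cast recommendation as a Markov Decision Process; below is a toy tabular Q-learning update in that spirit, where negative feedback simply enters as a negative reward. The table is a stand-in for the deep Q-networks the papers actually train.

```python
# Recommendation as an MDP: states are user histories, actions are recommended
# items, rewards come from user feedback (positive or negative).
from collections import defaultdict

Q = defaultdict(float)          # Q[(state, item)] -> value estimate
ALPHA, GAMMA = 0.1, 0.9

def q_update(state, item, reward, next_state, candidate_items):
    """One TD(0) update; negative feedback is just reward < 0."""
    best_next = max((Q[(next_state, a)] for a in candidate_items), default=0.0)
    Q[(state, item)] += ALPHA * (reward + GAMMA * best_next - Q[(state, item)])
```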
To address these issues, in this paper, we propose a Reinforcement Learning (RL) enhanced MTL framework, namely RMTL, to combine the losses of different recommendation tasks using dynamic weights. To be specific, the RMTL structure can address the aforementioned issues by (i) constructing an MTL environment from session-wise interactions, (ii) training a multi-task actor-critic network structure, which is compatible with most existing MTL-based recommendation models, and (iii) optimizing and fine-tuning the MTL loss function using the weights generated by critic networks. Experiments on two real-world public datasets demonstrate the effectiveness of RMTL with a higher AUC against state-of-the-art MTL-based recommendation models. Additionally, we evaluate and validate RMTL's compatibility and transferability across various MTL models.",True,True,"Liu, Ziru and Tian, Jiejie and Cai, Qingpeng and Zhao, Xiangyu and Gao, Jingtong and Liu, Shuchang and Chen, Dayou and He, Tonghao and Zheng, Dong and Jiang, Peng and others",2023.0,,,,,Multi-Task Recommendations with Reinforcement Learning,Multi-Task Recommendations with Reinforcement Learning - arXiv,https://arxiv.org/abs/2302.03328,"In this paper, we propose a Reinforcement Learning (RL) enhanced MTL framework, namely RMTL, to combine the losses of different recommendation tasks using" "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,wang2023multi,\cite{wang2023multi},Multi-Task Deep Recommender Systems: A Survey,http://arxiv.org/abs/2302.03525v2,"Multi-task learning (MTL) aims at learning related tasks in a unified model to achieve mutual improvement among tasks considering their shared knowledge. It is an important topic in recommendation due to the demand for multi-task prediction considering performance and efficiency. Although MTL has been well studied and developed, there is still a lack of systematic review in the recommendation community. To fill the gap, we provide a comprehensive review of existing multi-task deep recommender systems (MTDRS) in this survey. To be specific, the problem definition of MTDRS is first given, and it is compared with other related areas. Next, the development of MTDRS is depicted and the taxonomy is introduced from the task relation and methodology aspects. Specifically, the task relation is categorized into parallel, cascaded, and auxiliary with main, while the methodology is grouped into parameter sharing, optimization, and training mechanism. The survey concludes by summarizing the application and public datasets of MTDRS and highlighting the challenges and future directions of the field.",True,True,"Wang, Yuhao and Lam, Ha Tsz and Wong, Yi and Liu, Ziru and Zhao, Xiangyu and Wang, Yichao and Chen, Bo and Guo, Huifeng and Tang, Ruiming",2023.0,,,,arXiv preprint arXiv:2302.03525,Multi-Task Deep Recommender Systems: A Survey,Multi-Task Deep Recommender Systems: A Survey,https://arxiv.org/abs/2302.03525,by Y Wang · 2023 · Cited by 58 — We provide a comprehensive review of existing multi-task deep recommender systems (MTDRS) in this survey.
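A hedged sketch of RMTL's central idea from the record above: per-task losses combined with dynamic weights produced by critic networks rather than fixed linear coefficients. The tiny linear critics below are stand-ins for the actor-critic structure the paper trains.

```python
# Dynamic multi-task loss weighting: one small critic per task maps the
# session state to a weight, and the weighted losses are summed.
import torch
import torch.nn as nn
from typing import List

class WeightedMultiTaskLoss(nn.Module):
    def __init__(self, num_tasks: int, state_dim: int):
        super().__init__()
        self.critics = nn.ModuleList(nn.Linear(state_dim, 1) for _ in range(num_tasks))

    def forward(self, state: torch.Tensor, task_losses: List[torch.Tensor]) -> torch.Tensor:
        # state: (batch, state_dim); each critic yields a scalar weight in (0, 1).
        weights = [torch.sigmoid(c(state)).mean() for c in self.critics]
        return sum(w * l for w, l in zip(weights, task_losses))
```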
"Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,wang2023single,\cite{wang2023single},Single-shot feature selection for multi-task recommendations,,,True,False,"Wang, Yejing and Du, Zhaocheng and Zhao, Xiangyu and Chen, Bo and Guo, Huifeng and Tang, Ruiming and Dong, Zhenhua",2023.0,,,,,Single-shot feature selection for multi-task recommendations,Single-shot Feature Selection for Multi-task Recommendations,https://dl.acm.org/doi/pdf/10.1145/3539618.3591767,"To this end, this paper proposes a novel Single-shot Feature Selection framework for MTRSs, referred to as MultiSFS, which is capable of" "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,liu2024multimodal,\cite{liu2024multimodal},Multimodal Recommender Systems: A Survey,http://arxiv.org/abs/2302.03883v2,"The recommender system (RS) has been an integral toolkit of online services. They are equipped with various deep learning techniques to model user preference based on identifier and attribute information. With the emergence of multimedia services, such as short videos, news, etc., understanding these contents while recommending becomes critical. Besides, multimodal features are also helpful in alleviating the problem of data sparsity in RS. Thus, Multimodal Recommender System (MRS) has attracted much attention from both academia and industry recently. In this paper, we will give a comprehensive survey of the MRS models, mainly from technical views. First, we conclude the general procedures and major challenges for MRS. Then, we introduce the existing MRS models according to four categories, i.e., Modality Encoder, Feature Interaction, Feature Enhancement and Model Optimization. Besides, to make it convenient for those who want to research this field, we also summarize the dataset and code resources. Finally, we discuss some promising future directions of MRS and conclude this paper. To access more details of the surveyed papers, such as implementation code, we open source a repository.",True,True,"Liu, Qidong and Hu, Jiaxi and Xiao, Yutian and Zhao, Xiangyu and Gao, Jingtong and Wang, Wanyu and Li, Qing and Tang, Jiliang",2024.0,,,,ACM Computing Surveys,Multimodal Recommender Systems: A Survey,Multimodal Recommender Systems: A Survey,http://arxiv.org/pdf/2302.03883v2,"The recommender system (RS) has been an integral toolkit of online services. They are equipped with various deep learning techniques to model user preference based on identifier and attribute information. With the emergence of multimedia services, such as short videos, news, etc., understanding these contents while recommending becomes critical. Besides, multimodal features are also helpful in alleviating the problem of data sparsity in RS. Thus, Multimodal Recommender System (MRS) has attracted much attention from both academia and industry recently. In this paper, we will give a comprehensive survey of the MRS models, mainly from technical views. First, we conclude the general procedures and major challenges for MRS. Then, we introduce the existing MRS models according to four categories, i.e., Modality Encoder, Feature Interaction, Feature Enhancement and Model Optimization. Besides, to make it convenient for those who want to research this field, we also summarize the dataset and code resources. Finally, we discuss some promising future directions of MRS and conclude this paper.
To access more details of the surveyed papers, such as implementation code, we open source a repository." "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,chen2024survey,\cite{chen2024survey},A Survey on Cross-Domain Sequential Recommendation,http://arxiv.org/abs/2401.04971v4,"Cross-domain sequential recommendation (CDSR) shifts the modeling of user preferences from flat to stereoscopic by integrating and learning interaction information from multiple domains at different granularities (ranging from inter-sequence to intra-sequence and from single-domain to cross-domain). In this survey, we first define the CDSR problem using a four-dimensional tensor and then analyze its multi-type input representations under multidirectional dimensionality reductions. Following that, we provide a systematic overview from both macro and micro views. From a macro view, we abstract the multi-level fusion structures of various models across domains and discuss their bridges for fusion. From a micro view, focusing on the existing models, we first discuss the basic technologies and then explain the auxiliary learning technologies. Finally, we exhibit the available public datasets and the representative experimental results as well as provide some insights into future directions for research in CDSR.",True,True,"Chen, Shu and Xu, Zitao and Pan, Weike and Yang, Qiang and Ming, Zhong",2024.0,,,,arXiv preprint arXiv:2401.04971,A Survey on Cross-Domain Sequential Recommendation,[PDF] A Survey on Cross-Domain Sequential Recommendation - IJCAI,https://www.ijcai.org/proceedings/2024/0884.pdf,Cross-domain sequential recommendation (CDSR) shifts the modeling of user preferences from flat to stereoscopic by integrating and learning inter-. "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,li2022gromov,\cite{li2022gromov},Gromov-Wasserstein guided representation learning for cross-domain recommendation,,,True,False,"Li, Xinhang and Qiu, Zhaopeng and Zhao, Xiangyu and Wang, Zihao and Zhang, Yong and Xing, Chunxiao and Wu, Xian",2022.0,,,,,Gromov-Wasserstein guided representation learning for cross-domain recommendation,HestiaSky - GitHub,https://github.com/HestiaSky,GWCDR Public. Repo of CIKM2022 Paper Gromov-Wasserstein Guided Representation Learning for Cross-Domain Recommendation. "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,wang2023plate,\cite{wang2023plate},PLATE: A prompt-enhanced paradigm for multi-scenario recommendations,,,True,False,"Wang, Yuhao and Zhao, Xiangyu and Chen, Bo and Liu, Qidong and Guo, Huifeng and Liu, Huanshuo and Wang, Yichao and Zhang, Rui and Tang, Ruiming",2023.0,,,,,PLATE: A prompt-enhanced paradigm for multi-scenario recommendations,PLATE: A Prompt-Enhanced Paradigm for Multi-Scenario ...,https://dl.acm.org/doi/10.1145/3539618.3591750,"In this work, we propose a novel prompt-enhanced paradigm for multi-scenario recommendation.
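For the PLATE record above, a minimal sketch of prompt-enhanced multi-scenario recommendation: a shared backbone's input is prepended with a small learned prompt per scenario while the backbone itself can stay frozen. Shapes and the prompt length are illustrative assumptions.

```python
# Scenario-specific soft prompts prepended to the input representation.
import torch
import torch.nn as nn

class ScenarioPrompt(nn.Module):
    def __init__(self, num_scenarios: int, prompt_len: int, dim: int):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_scenarios, prompt_len, dim) * 0.02)

    def forward(self, x: torch.Tensor, scenario_id: int) -> torch.Tensor:
        # x: (batch, seq_len, dim); prepend the scenario's learned prompt tokens.
        prompt = self.prompts[scenario_id].expand(x.size(0), -1, -1)
        return torch.cat([prompt, x], dim=1)
```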
Specifically, a unified DRS backbone model is first" "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,li2023hamur,\cite{li2023hamur},HAMUR: Hyper Adapter for Multi-Domain Recommendation,http://arxiv.org/abs/2309.06217v1,"Multi-Domain Recommendation (MDR) has gained significant attention in recent years, which leverages data from multiple domains to enhance their performance concurrently. However, current MDR models are confronted with two limitations. Firstly, the majority of these models adopt an approach that explicitly shares parameters between domains, leading to mutual interference among them. Secondly, due to the distribution differences among domains, the utilization of static parameters in existing methods limits their flexibility to adapt to diverse domains. To address these challenges, we propose a novel model, Hyper Adapter for Multi-Domain Recommendation (HAMUR). Specifically, HAMUR consists of two components: (1) a domain-specific adapter, designed as a pluggable module that can be seamlessly integrated into various existing multi-domain backbone models, and (2) a domain-shared hyper-network, which implicitly captures shared information among domains and dynamically generates the parameters for the adapter. We conduct extensive experiments on two public datasets using various backbone networks. The experimental results validate the effectiveness and scalability of the proposed model.",True,True,"Li, Xiaopeng and Yan, Fan and Zhao, Xiangyu and Wang, Yichao and Chen, Bo and Guo, Huifeng and Tang, Ruiming",2023.0,,,,,HAMUR: Hyper Adapter for Multi-Domain Recommendation,HAMUR: Hyper Adapter for Multi-Domain Recommendation,https://dl.acm.org/doi/pdf/10.1145/3583780.3615137,"by X Li · 2023 · Cited by 39 — HAMUR is a Hyper Adapter for Multi-Domain Recommendation, consisting of a domain-specific adapter and a domain-shared hyper-network." "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,gao2023autotransfer,\cite{gao2023autotransfer},AutoTransfer: Instance transfer for cross-domain recommendations,,,True,False,"Gao, Jingtong and Zhao, Xiangyu and Chen, Bo and Yan, Fan and Guo, Huifeng and Tang, Ruiming",2023.0,,,,,AutoTransfer: Instance transfer for cross-domain recommendations,Instance Transfer for Cross-Domain Recommendations,https://dl.acm.org/doi/pdf/10.1145/3539618.3591701,"by J Gao · 2023 · Cited by 28 — Specifically, AutoTransfer acts as an agent that adaptively selects a subset of informative and transferable instances from the source domain." "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,zhang2024m3oe,\cite{zhang2024m3oe},"M3oE: Multi-Domain Multi-Task Mixture-of Experts Recommendation Framework",http://arxiv.org/abs/2404.18465v3,"Multi-domain recommendation and multi-task recommendation have demonstrated their effectiveness in leveraging common information from different domains and objectives for comprehensive user modeling. Nonetheless, the practical recommendation usually faces multiple domains and tasks simultaneously, which cannot be well-addressed by current methods. To this end, we introduce M3oE, an adaptive Multi-domain Multi-task Mixture-of-Experts recommendation framework. M3oE integrates multi-domain information, maps knowledge across domains and tasks, and optimizes multiple objectives.
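The M3oE record above stacks three mixture-of-experts modules (common, domain-aspect, task-aspect); a minimal single MoE block with a softmax gate is sketched below. Expert and gate sizes are illustrative assumptions.

```python
# One mixture-of-experts block: a softmax gate mixes expert outputs per input.
import torch
import torch.nn as nn

class MoE(nn.Module):
    def __init__(self, dim: int, num_experts: int):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim); gate weights: (batch, num_experts).
        w = torch.softmax(self.gate(x), dim=-1)
        outs = torch.stack([e(x) for e in self.experts], dim=-1)  # (batch, dim, E)
        return (outs * w.unsqueeze(1)).sum(-1)                    # weighted mix
```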
We leverage three mixture-of-experts modules to learn common, domain-aspect, and task-aspect user preferences respectively to address the complex dependencies among multiple domains and tasks in a disentangled manner. Additionally, we design a two-level fusion mechanism for precise control over feature extraction and fusion across diverse domains and tasks. The framework's adaptability is further enhanced by applying AutoML techniques, which allow dynamic structure optimization. To the best of the authors' knowledge, our M3oE is the first effort to solve multi-domain multi-task recommendation self-adaptively. Extensive experiments on two benchmark datasets against diverse baselines demonstrate M3oE's superior performance. The implementation code is available to ensure reproducibility.",True,True,"Zhang, Zijian and Liu, Shuchang and Yu, Jiaao and Cai, Qingpeng and Zhao, Xiangyu and Zhang, Chunxu and Liu, Ziru and Liu, Qidong and Zhao, Hongwei and Hu, Lantao and others",2024.0,,,,,"M3oE: Multi-Domain Multi-Task Mixture-of Experts Recommendation Framework",[Literature Review] M3oE: Multi-Domain Multi-Task Mixture-of ...,https://www.themoonlight.io/en/review/m3oe-multi-domain-multi-task-mixture-of-experts-recommendation-framework,"The M3oE framework addresses the Multi-Domain Multi-Task (MDMT) recommendation problem, which aims to generate personalized item recommendations by maximizing" "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,jia2024d3,\cite{jia2024d3},"D3: A Methodological Exploration of Domain Division, Modeling, and Balance in Multi-Domain Recommendations",,,True,False,"Jia, Pengyue and Wang, Yichao and Lin, Shanru and Li, Xiaopeng and Zhao, Xiangyu and Guo, Huifeng and Tang, Ruiming",2024.0,,,,,"D3: A Methodological Exploration of Domain Division, Modeling, and Balance in Multi-Domain Recommendations","D3: A Methodological Exploration of Domain Division, Modeling ...",https://www.researchgate.net/publication/379285737_D3_A_Methodological_Exploration_of_Domain_Division_Modeling_and_Balance_in_Multi-Domain_Recommendations,"To address these challenges, this paper proposes a universal and flexible framework D3 aimed at optimizing the multi-domain recommendation pipeline from three" "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,liu2024multifs,\cite{liu2024multifs},MultiFS: Automated multi-scenario feature selection in deep recommender systems,,,True,False,"Liu, Dugang and Yang, Chaohua and Tang, Xing and Wang, Yejing and Lyu, Fuyuan and Luo, Weihong and He, Xiuqiang and Ming, Zhong and Zhao, Xiangyu",2024.0,,,,,MultiFS: Automated multi-scenario feature selection in deep recommender systems,Automated Multi-Scenario Feature Selection in Deep Recommender ...,https://scholars.cityu.edu.hk/en/publications/multifs(59f43c18-491f-49be-9f65-8c9d9edadea7)/projects.html,"MultiFS: Automated Multi-Scenario Feature Selection in Deep Recommender Systems" "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,liu2025sigma,\cite{liu2025sigma},SIGMA: Selective Gated Mamba for Sequential Recommendation,http://arxiv.org/abs/2408.11451v4,"In various domains, Sequential Recommender Systems (SRS) have become essential due to their superior capability
to discern intricate user preferences. Typically, SRS utilize transformer-based architectures to forecast the subsequent item within a sequence. Nevertheless, the quadratic computational complexity inherent in these models often leads to inefficiencies, hindering the achievement of real-time recommendations. Mamba, a recent advancement, has exhibited exceptional performance in time series prediction, significantly enhancing both efficiency and accuracy. However, integrating Mamba directly into SRS poses several challenges. Its inherently unidirectional nature may constrain the model's capacity to capture the full context of user-item interactions, while its instability in state estimation can compromise its ability to detect short-term patterns within interaction sequences. To overcome these issues, we introduce a new framework named Selective Gated Mamba (SIGMA) for Sequential Recommendation. This framework leverages a Partially Flipped Mamba (PF-Mamba) to construct a bidirectional architecture specifically tailored to improve contextual modeling. Additionally, an input-sensitive Dense Selective Gate (DS Gate) is employed to optimize directional weights and enhance the processing of sequential information in PF-Mamba. For short sequence modeling, we have also developed a Feature Extract GRU (FE-GRU) to efficiently capture short-term dependencies. Empirical results indicate that SIGMA outperforms current models on five real-world datasets. Our implementation code is available at https://github.com/ziwliu-cityu/SIMGA to ease reproducibility.",True,True,"Liu, Ziwei and Liu, Qidong and Wang, Yejing and Wang, Wanyu and Jia, Pengyue and Wang, Maolin and Liu, Zitao and Chang, Yi and Zhao, Xiangyu",2025.0,,,,,SIGMA: Selective Gated Mamba for Sequential Recommendation,SIGMA: Selective Gated Mamba for Sequential Recommendation,http://arxiv.org/pdf/2408.11451v4,"In various domains, Sequential Recommender Systems (SRS) have become essential due to their superior capability to discern intricate user preferences. Typically, SRS utilize transformer-based architectures to forecast the subsequent item within a sequence. Nevertheless, the quadratic computational complexity inherent in these models often leads to inefficiencies, hindering the achievement of real-time recommendations. Mamba, a recent advancement, has exhibited exceptional performance in time series prediction, significantly enhancing both efficiency and accuracy. However, integrating Mamba directly into SRS poses several challenges. Its inherently unidirectional nature may constrain the model's capacity to capture the full context of user-item interactions, while its instability in state estimation can compromise its ability to detect short-term patterns within interaction sequences. To overcome these issues, we introduce a new framework named Selective Gated Mamba (SIGMA) for Sequential Recommendation. This framework leverages a Partially Flipped Mamba (PF-Mamba) to construct a bidirectional architecture specifically tailored to improve contextual modeling. Additionally, an input-sensitive Dense Selective Gate (DS Gate) is employed to optimize directional weights and enhance the processing of sequential information in PF-Mamba. For short sequence modeling, we have also developed a Feature Extract GRU (FE-GRU) to efficiently capture short-term dependencies. Empirical results indicate that SIGMA outperforms current models on five real-world datasets. 
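A heavily hedged sketch of SIGMA's bidirectional idea from the record above: encode the sequence forward and flipped, then mix the two directions with an input-sensitive gate. GRUs stand in here for the Mamba blocks the paper actually uses, so this is an analogue, not the method itself.

```python
# Bidirectional encoding with an input-sensitive direction gate (SIGMA-inspired).
import torch
import torch.nn as nn

class BiDirectionalGated(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.fwd = nn.GRU(dim, dim, batch_first=True)   # stand-in for Mamba
        self.bwd = nn.GRU(dim, dim, batch_first=True)   # stand-in for flipped Mamba
        self.gate = nn.Linear(2 * dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h_f, _ = self.fwd(x)                        # (batch, seq, dim)
        h_b, _ = self.bwd(torch.flip(x, dims=[1]))  # encode the flipped sequence
        h_b = torch.flip(h_b, dims=[1])             # realign to forward order
        g = torch.sigmoid(self.gate(torch.cat([h_f, h_b], dim=-1)))
        return g * h_f + (1 - g) * h_b              # per-position direction mix
```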
Our implementation code is available at https://github.com/ziwliu-cityu/SIMGA to ease reproducibility." "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,liu2023diffuasr,\cite{liu2023diffuasr},Diffusion Augmentation for Sequential Recommendation,http://arxiv.org/abs/2309.12858v1,"Sequential recommendation (SRS) has become the technical foundation in many applications recently, which aims to recommend the next item based on the user's historical interactions. However, sequential recommendation often faces the problem of data sparsity, which widely exists in recommender systems. Besides, most users only interact with a few items, but existing SRS models often underperform these users. Such a problem, named the long-tail user problem, is still to be resolved. Data augmentation is a distinct way to alleviate these two problems, but such methods often need fabricated training strategies or are hindered by poor-quality generated interactions. To address these problems, we propose a Diffusion Augmentation for Sequential Recommendation (DiffuASR) for higher-quality generation. The dataset augmented by DiffuASR can be used to train sequential recommendation models directly, free from complex training procedures. To make the best of the generation ability of the diffusion model, we first propose a diffusion-based pseudo sequence generation framework to fill the gap between image and sequence generation. Then, a sequential U-Net is designed to adapt the diffusion noise prediction model U-Net to the discrete sequence generation task. Finally, we develop two guide strategies to assimilate the preference between generated and original sequences. To validate the proposed DiffuASR, we conduct extensive experiments on three real-world datasets with three sequential recommendation models. The experimental results illustrate the effectiveness of DiffuASR. As far as we know, DiffuASR is a pioneer in introducing the diffusion model to recommendation.",True,True,"Liu, Qidong and Yan, Fan and Zhao, Xiangyu and Du, Zhaocheng and Guo, Huifeng and Tang, Ruiming and Tian, Feng",2023.0,,,,,Diffusion Augmentation for Sequential Recommendation,Diffusion Augmentation for Sequential Recommendation,http://arxiv.org/pdf/2309.12858v1,"Sequential recommendation (SRS) has become the technical foundation in many applications recently, which aims to recommend the next item based on the user's historical interactions. However, sequential recommendation often faces the problem of data sparsity, which widely exists in recommender systems. Besides, most users only interact with a few items, but existing SRS models often underperform these users. Such a problem, named the long-tail user problem, is still to be resolved. Data augmentation is a distinct way to alleviate these two problems, but such methods often need fabricated training strategies or are hindered by poor-quality generated interactions. To address these problems, we propose a Diffusion Augmentation for Sequential Recommendation (DiffuASR) for higher-quality generation. The dataset augmented by DiffuASR can be used to train sequential recommendation models directly, free from complex training procedures. To make the best of the generation ability of the diffusion model, we first propose a diffusion-based pseudo sequence generation framework to fill the gap between image and sequence generation.
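For the DiffuASR record above, a toy sketch of one diffusion training step on pseudo-sequence embeddings, following the standard DDPM objective the paper builds on; the `denoiser` argument stands in for the paper's sequential U-Net.

```python
# One DDPM-style training step: noise clean embeddings, predict the noise.
import torch
import torch.nn.functional as F

def diffusion_step(x0: torch.Tensor, denoiser, alphas_cumprod: torch.Tensor):
    """x0: (batch, seq, dim) clean item embeddings; alphas_cumprod: (T,)."""
    t = torch.randint(0, alphas_cumprod.size(0), (x0.size(0),))
    a = alphas_cumprod[t].view(-1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise   # forward process q(x_t | x_0)
    return F.mse_loss(denoiser(x_t, t), noise)     # train to predict the noise
```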
Then, a sequential U-Net is designed to adapt the diffusion noise prediction model U-Net to the discrete sequence generation task. Finally, we develop two guide strategies to assimilate the preference between generated and original sequences. To validate the proposed DiffuASR, we conduct extensive experiments on three real-world datasets with three sequential recommendation models. The experimental results illustrate the effectiveness of DiffuASR. As far as we know, DiffuASR is a pioneer in introducing the diffusion model to recommendation." "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,li2023strec,\cite{li2023strec},STRec: Sparse transformer for sequential recommendations,,,True,False,"Li, Chengxi and Wang, Yejing and Liu, Qidong and Zhao, Xiangyu and Wang, Wanyu and Wang, Yiqi and Zou, Lixin and Fan, Wenqi and Li, Qing",2023.0,,,,,STRec: Sparse transformer for sequential recommendations,STRec: Sparse Transformer for Sequential Recommendations,https://dl.acm.org/doi/10.1145/3604915.3608779,"In this paper, we identify the sparse attention phenomenon in transformer-based SRS models and propose Sparse Transformer for sequential Recommendation tasks (" "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,liu2023dirac,\cite{liu2023dirac},Disentangling interest and conformity for eliminating popularity bias in session-based recommendation,,,True,False,"Liu, Qidong and Tian, Feng and Zheng, Qinghua and Wang, Qianying",2023.0,,,,Knowledge and Information Systems,Disentangling interest and conformity for eliminating popularity bias in session-based recommendation,Disentangling interest and conformity for eliminating popularity bias ...,https://scholars.cityu.edu.hk/en/publications/disentangling-interest-and-conformity-for-eliminating-popularity-bias-in-sessionbased-recommendation(8b5f5a40-1b51-4d7d-88ab-a697835856bc)/fingerprints.html,"Disentangling interest and conformity for eliminating popularity bias in session-based recommendation. Qidong Liu, Feng Tian*, Qinghua Zheng, Qianying Wang." "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,wu2022graph,\cite{wu2022graph},Graph neural networks in recommender systems: a survey,,,True,False,"Wu, Shiwen and Sun, Fei and Zhang, Wentao and Xie, Xu and Cui, Bin",2022.0,,,,ACM Computing Surveys,Graph neural networks in recommender systems: a survey,Graph Neural Networks in Recommender Systems: A Survey,http://arxiv.org/pdf/2011.02260v4,"With the explosive growth of online information, recommender systems play a key role in alleviating information overload. Due to the important application value of recommender systems, there have always been emerging works in this field. In recommender systems, the main challenge is to learn the effective user/item representations from their interactions and side information (if any). Recently, graph neural network (GNN) techniques have been widely utilized in recommender systems since most of the information in recommender systems essentially has graph structure and GNN has superiority in graph representation learning. This article aims to provide a comprehensive review of recent research efforts on GNN-based recommender systems. Specifically, we provide a taxonomy of GNN-based recommendation models according to the types of information used and recommendation tasks.
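The GNN survey record above centers on graph propagation over interaction data; an illustrative LightGCN-style smoothing step over a pre-normalized user-item adjacency (`adj_norm`, an assumption) is sketched below.

```python
# LightGCN-style propagation: smooth user/item embeddings over the normalized
# bipartite interaction graph with no per-layer feature transforms.
import torch

def propagate(emb: torch.Tensor, adj_norm: torch.Tensor, num_layers: int = 3):
    """emb: (num_users + num_items, dim); adj_norm: sparse normalized adjacency.
    Returns the layer-averaged embeddings."""
    layers = [emb]
    for _ in range(num_layers):
        emb = torch.sparse.mm(adj_norm, emb)  # one hop of neighborhood smoothing
        layers.append(emb)
    return torch.stack(layers).mean(0)
```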
Moreover, we systematically analyze the challenges of applying GNNs to different types of data and discuss how existing works in this field address these challenges. Furthermore, we state new perspectives pertaining to the development of this field. We collect the representative papers along with their open-source implementations in https://github.com/wusw14/GNN-in-RS." "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,cao2022contrastive,\cite{cao2022contrastive},Contrastive Cross-Domain Sequential Recommendation,http://arxiv.org/abs/2304.03891v1,"Cross-Domain Sequential Recommendation (CDSR) aims to predict future interactions based on users' historical sequential interactions from multiple domains. Generally, a key challenge of CDSR is how to mine precise cross-domain user preference based on the intra-sequence and inter-sequence item interactions. Existing works first learn single-domain user preference only with intra-sequence item interactions, and then build a transferring module to obtain cross-domain user preference. However, such a pipeline and implicit solution can be severely limited by the bottleneck of the designed transferring module, and fails to consider inter-sequence item relationships. In this paper, we propose C^2DSR to tackle the above problems to capture precise user preferences. The main idea is to simultaneously leverage the intra- and inter-sequence item relationships, and jointly learn the single- and cross-domain user preferences. Specifically, we first utilize a graph neural network to mine inter-sequence item collaborative relationship, and then exploit a sequential attentive encoder to capture intra-sequence item sequential relationship. Based on them, we devise two different sequential training objectives to obtain user single-domain and cross-domain representations. Furthermore, we present a novel contrastive cross-domain infomax objective to enhance the correlation between single- and cross-domain user representations by maximizing their mutual information. To validate the effectiveness of C^2DSR, we first re-split four e-commerce datasets, and then conduct extensive experiments to demonstrate the effectiveness of our approach C^2DSR.",True,True,"Cao, Jiangxia and Cong, Xin and Sheng, Jiawei and Liu, Tingwen and Wang, Bin",2022.0,,,,,Contrastive Cross-Domain Sequential Recommendation,Contrastive Cross-Domain Sequential Recommendation,https://dl.acm.org/doi/10.1145/3511808.3557262,Cross-Domain Sequential Recommendation (CDSR) aims to predict future interactions based on users' historical sequential interactions from multiple domains. "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,wang2023unbiased,\cite{wang2023unbiased},"Unbiased and Robust: External Attention-enhanced Graph Contrastive Learning for Cross-domain Sequential Recommendation",http://arxiv.org/abs/2310.04633v3,"Cross-domain sequential recommenders (CSRs) are gaining considerable research attention as they can capture user sequential preference by leveraging side information from multiple domains. However, these works typically follow an ideal setup, i.e., different domains obey similar data distribution, which ignores the bias brought by asymmetric interaction densities (a.k.a. the inter-domain density bias).
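For the C^2DSR record above, a minimal InfoNCE-style sketch of the contrastive infomax objective: a user's single-domain and cross-domain representations are treated as positives against in-batch negatives, which maximizes a lower bound on their mutual information.

```python
# InfoNCE between single-domain and cross-domain user representations.
import torch
import torch.nn.functional as F

def infonce(z_single: torch.Tensor, z_cross: torch.Tensor, tau: float = 0.2):
    """z_single, z_cross: (batch, dim); row i of each comes from the same user."""
    z1 = F.normalize(z_single, dim=-1)
    z2 = F.normalize(z_cross, dim=-1)
    logits = z1 @ z2.t() / tau          # (batch, batch) cosine similarities
    labels = torch.arange(z1.size(0))   # diagonal pairs are the positives
    return F.cross_entropy(logits, labels)
```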
Besides, the frequently adopted mechanism (e.g., the self-attention network) in the sequence encoder only focuses on the interactions within a local view, which overlooks the global correlations between different training batches. To this end, we propose an External Attention-enhanced Graph Contrastive Learning framework, namely EA-GCL. Specifically, to remove the impact of the inter-domain density bias, an auxiliary Self-Supervised Learning (SSL) task is attached to the traditional graph encoder in a multi-task learning manner. To robustly capture users' behavioral patterns, we develop an external attention-based sequence encoder that contains an MLP-based memory-sharing structure. Unlike the self-attention mechanism, such a structure can effectively alleviate the bias interference from the batch-based training scheme. Extensive experiments on two real-world datasets demonstrate that EA-GCL outperforms several state-of-the-art baselines on CSR tasks. The source codes and relevant datasets are available at https://github.com/HoupingY/EA-GCL.",True,True,"Wang, Xinhua and Yue, Houping and Wang, Zizheng and Xu, Liancheng and Zhang, Jinyu",2023.0,,,,,"Unbiased and Robust: External Attention-enhanced Graph Contrastive Learning for Cross-domain Sequential Recommendation",Unbiased and Robust: External Attention-enhanced Graph ...,https://www.researchgate.net/publication/378019593_Unbiased_and_Robust_External_Attention-enhanced_Graph_Contrastive_Learning_for_Cross-domain_Sequential_Recommendation,Conference Paper. Unbiased and Robust: External Attention-enhanced Graph Contrastive Learning for Cross-domain Sequential Recommendation. December 2023. "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,xu2023multi,\cite{xu2023multi},A multi-view graph contrastive learning framework for cross-domain sequential recommendation,,,True,False,"Xu, Zitao and Pan, Weike and Ming, Zhong",2023.0,,,,,A multi-view graph contrastive learning framework for cross-domain sequential recommendation,[PDF] A Multi-view Graph Contrastive Learning Framework for Cross ...,https://csse.szu.edu.cn/staff/panwk/recommendation/IRT-SupplementaryInformation-CD-SOCCF/MGCL.pdf,We propose a generic contrastive learning framework named multi-view graph contrastive learning (MGCL) for cross-domain sequential recommendation.
"Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,ma2019pi,\cite{ma2019pi},$\pi$-net: A parallel information-sharing network for shared-account cross-domain sequential recommendations,,,True,False,"Ma, Muyang and Ren, Pengjie and Lin, Yujie and Chen, Zhumin and Ma, Jun and Rijke, Maarten de",2019.0,,,,,$\pi$-net: A parallel information-sharing network for shared-account cross-domain sequential recommendations,[PDF] π-Net: A Parallel Information-sharing Network for ...,https://www.semanticscholar.org/paper/%CF%80-Net%3A-A-Parallel-Information-sharing-Network-for-Ma-Ren/fa990aee9a8f157b5d393f5f3eaa014e1e5c67aa,A Parallel Information-sharing Network (π-Net) is proposed to simultaneously generate recommendations for two domains where user behaviors "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,ma2024triple,\cite{ma2024triple},Triple Sequence Learning for Cross-domain Recommendation,http://arxiv.org/abs/2304.05027v2,"Cross-domain recommendation (CDR) aims to leverage the correlation of users' behaviors in both the source and target domains to improve the user preference modeling in the target domain. Conventional CDR methods typically explore the dual relations between the source and target domains' behaviors. However, this may ignore the informative mixed behaviors that naturally reflect the user's global preference. To address this issue, we present a novel framework, termed triple sequence learning for cross-domain recommendation (Tri-CDR), which jointly models the source, target, and mixed behavior sequences to highlight the global and target preference and precisely model the triple correlation in CDR. Specifically, Tri-CDR independently models the hidden representations for the triple behavior sequences and proposes a triple cross-domain attention (TCA) method to emphasize the informative knowledge related to both the user's global and target-domain preference. To comprehensively explore the cross-domain correlations, we design a triple contrastive learning (TCL) strategy that simultaneously considers the coarse-grained similarities and fine-grained distinctions among the triple sequences, ensuring alignment while preserving information diversity across domains. We conduct extensive experiments and analyses on six cross-domain settings. The significant improvements of Tri-CDR with different sequential encoders verify its effectiveness and universality. The code will be released upon acceptance.",True,True,"Ma, Haokai and Xie, Ruobing and Meng, Lei and Chen, Xin and Zhang, Xu and Lin, Leyu and Zhou, Jie",2024.0,,,,ACM Transactions on Information Systems,Triple Sequence Learning for Cross-domain Recommendation,Triple Sequence Learning for Cross-domain Recommendation,https://dl.acm.org/doi/10.1145/3638351,"We present a novel framework, termed triple sequence learning for cross-domain recommendation (Tri-CDR), which jointly models the source, target, and mixed" "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,xu2024multi,\cite{xu2024multi},"Multi-perspective Improvement of Knowledge Graph Completion with Large Language Models",http://arxiv.org/abs/2403.01972v1,"Knowledge graph completion (KGC) is a widely used method to tackle incompleteness in knowledge graphs (KGs) by making predictions for missing links.
Description-based KGC leverages pre-trained language models to learn entity and relation representations with their names or descriptions, which shows promising results. However, the performance of description-based KGC is still limited by the quality of text and the incomplete structure, as it lacks sufficient entity descriptions and relies solely on relation names, leading to sub-optimal results. To address this issue, we propose MPIKGC, a general framework to compensate for the deficiency of contextualized knowledge and improve KGC by querying large language models (LLMs) from various perspectives, which involves leveraging the reasoning, explanation, and summarization capabilities of LLMs to expand entity descriptions, understand relations, and extract structures, respectively. We conducted an extensive evaluation of the effectiveness and improvement of our framework based on four description-based KGC models and four datasets, for both link prediction and triplet classification tasks.",True,True,"Xu, Derong and Zhang, Ziheng and Lin, Zhenxi and Wu, Xian and Zhu, Zhihong and Xu, Tong and Zhao, Xiangyu and Zheng, Yefeng and Chen, Enhong",2024.0,,,,arXiv preprint arXiv:2403.01972,"Multi-perspective Improvement of Knowledge Graph Completion with Large Language Models",Multi-perspective Improvement of Knowledge Graph Completion ...,https://aclanthology.org/2024.lrec-main.1044/,"Derong Xu, Ziheng Zhang, Zhenxi Lin, Xian Wu, Zhihong Zhu, Tong Xu, Xiangyu Zhao, Yefeng Zheng, and Enhong Chen. Multi-perspective Improvement of Knowledge Graph Completion with Large Language Models. In Proceedings of LREC-COLING 2024, pages 11956–11968, Torino, Italia. ELRA and ICCL. URL: https://aclanthology.org/2024.lrec-main.1044/; PDF: https://aclanthology.org/2024.lrec-main.1044.pdf" "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,liu2024moelora,\cite{liu2024moelora},"When MOE Meets LLMs: Parameter Efficient Fine-tuning for Multi-task Medical Applications",http://arxiv.org/abs/2310.18339v2,"The recent surge in Large Language Models (LLMs) has garnered significant attention across numerous fields. Fine-tuning is often required to fit general LLMs for a specific domain, like the web-based healthcare system. However, two problems arise during fine-tuning LLMs for medical applications. One is the task variety problem, which involves distinct tasks in real-world medical scenarios. The variety often leads to sub-optimal fine-tuning due to data imbalance and seesaw problems. Besides, the large number of parameters in LLMs leads to huge time and computation costs during fine-tuning. To address these two problems, we propose a novel parameter efficient fine-tuning framework for multi-task medical applications, dubbed as MOELoRA.
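A hedged sketch of MPIKGC-style multi-perspective prompting from the record above: query an LLM to expand entity descriptions, explain relations, and extract structure, then feed the responses to a description-based KGC model. The templates and `ask_llm` are hypothetical stand-ins.

```python
# Three-perspective LLM enrichment for description-based KGC (MPIKGC-inspired).
from typing import Callable

TEMPLATES = {
    "entity":    "Give a concise encyclopedic description of the entity '{x}'.",
    "relation":  "Explain what the relation '{x}' means in a knowledge graph.",
    "structure": "List entities most closely related to '{x}', comma-separated.",
}

def enrich(x: str, perspective: str, ask_llm: Callable[[str], str]) -> str:
    """Return LLM-generated text to augment the KGC model's textual inputs."""
    return ask_llm(TEMPLATES[perspective].format(x=x))
```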
The designed framework aims to absorb the benefits of both mixture-of-experts (MOE) for multi-task learning and low-rank adaptation (LoRA) for parameter-efficient fine-tuning. To unify MOE and LoRA, we devise multiple experts as the trainable parameters, where each expert consists of a pair of low-rank matrices to retain the small size of trainable parameters. Then, a task-motivated gate function for all MOELoRA layers is proposed, which can control the contributions of each expert and produce distinct parameters for various tasks. We conduct experiments on a multi-task medical dataset, indicating that MOELoRA outperforms existing parameter-efficient fine-tuning methods. The code is available online.",True,True,"Liu, Qidong and Wu, Xian and Zhao, Xiangyu and Zhu, Yuanshao and Xu, Derong and Tian, Feng and Zheng, Yefeng",2024.0,,,,,"When MOE Meets LLMs: Parameter Efficient Fine-tuning for Multi-task Medical Applications",When MOE Meets LLMs: Parameter Efficient Fine-tuning ...,https://arxiv.org/html/2310.18339v2,"We propose a novel parameter efficient fine-tuning framework for multi-task medical applications, dubbed as MOELoRA." "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,wu2024survey,\cite{wu2024survey},Explainability for Large Language Models: A Survey,http://arxiv.org/abs/2309.01029v3,"Large language models (LLMs) have demonstrated impressive capabilities in natural language processing. However, their internal mechanisms are still unclear and this lack of transparency poses unwanted risks for downstream applications. Therefore, understanding and explaining these models is crucial for elucidating their behaviors, limitations, and social impacts. In this paper, we introduce a taxonomy of explainability techniques and provide a structured overview of methods for explaining Transformer-based language models. We categorize techniques based on the training paradigms of LLMs: traditional fine-tuning-based paradigm and prompting-based paradigm.
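For the MOELoRA record above, a sketch of a layer whose experts are pairs of low-rank matrices mixed by a task-conditioned gate on top of a frozen base weight, matching the abstract's description. All sizes are illustrative.

```python
# MOELoRA-style layer: LoRA experts (low-rank pairs A, B) mixed by a task gate.
import torch
import torch.nn as nn

class MOELoRALinear(nn.Module):
    def __init__(self, d_in: int, d_out: int, rank: int, num_experts: int, num_tasks: int):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)  # frozen pretrained weight
        self.A = nn.Parameter(torch.randn(num_experts, d_in, rank) * 0.02)
        self.B = nn.Parameter(torch.zeros(num_experts, rank, d_out))
        self.gate = nn.Embedding(num_tasks, num_experts)  # task-motivated gate

    def forward(self, x: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_in); task_id: (batch,) long tensor of task indices.
        w = torch.softmax(self.gate(task_id), dim=-1)               # (batch, E)
        delta = torch.einsum("bd,edr,ero->beo", x, self.A, self.B)  # expert updates
        return self.base(x) + (w.unsqueeze(-1) * delta).sum(1)
```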
For each paradigm, we summarize the goals and dominant approaches for generating local explanations of individual predictions and global explanations of overall model knowledge. We also discuss metrics for evaluating generated explanations, and discuss how explanations can be leveraged to debug models and improve performance. Lastly, we examine key challenges and emerging opportunities for explanation techniques in the era of LLMs in comparison to conventional machine learning models." "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,lin2023can,\cite{lin2023can},How Can Recommender Systems Benefit from Large Language Models: A Survey,http://arxiv.org/abs/2306.05817v6,"With the rapid development of online services, recommender systems (RS) have become increasingly indispensable for mitigating information overload. Despite remarkable progress, conventional recommendation models (CRM) still have some limitations, e.g., lacking open-world knowledge, and difficulties in comprehending users' underlying preferences and motivations. Meanwhile, large language models (LLM) have shown impressive general intelligence and human-like capabilities, which mainly stem from their extensive open-world knowledge, reasoning ability, as well as their comprehension of human culture and society. Consequently, the emergence of LLM is inspiring the design of recommender systems and pointing out a promising research direction, i.e., whether we can incorporate LLM and benefit from their knowledge and capabilities to compensate for the limitations of CRM. In this paper, we conduct a comprehensive survey on this research direction from the perspective of the whole pipeline in real-world recommender systems. Specifically, we summarize existing works from two orthogonal aspects: where and how to adapt LLM to RS. For the WHERE question, we discuss the roles that LLM could play in different stages of the recommendation pipeline, i.e., feature engineering, feature encoder, scoring/ranking function, user interaction, and pipeline controller. For the HOW question, we investigate the training and inference strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to tune LLM or not, and whether to involve conventional recommendation models for inference. Then, we highlight key challenges in adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and ethics. Finally, we summarize the survey and discuss the future prospects. We actively maintain a GitHub repository for papers and other related resources: https://github.com/CHIANGEL/Awesome-LLM-for-RecSys/.",True,True,"Lin, Jianghao and Dai, Xinyi and Xi, Yunjia and Liu, Weiwen and Chen, Bo and Zhang, Hao and Liu, Yong and Wu, Chuhan and Li, Xiangyang and Zhu, Chenxu and others",2023.0,,,,arXiv preprint arXiv:2306.05817,How Can Recommender Systems Benefit from Large Language Models: A Survey,How Can Recommender Systems Benefit from Large Language Models: A Survey,http://arxiv.org/pdf/2306.05817v6,"With the rapid development of online services, recommender systems (RS) have become increasingly indispensable for mitigating information overload. Despite remarkable progress, conventional recommendation models (CRM) still have some limitations, e.g., lacking open-world knowledge, and difficulties in comprehending users' underlying preferences and motivations. 
Meanwhile, large language models (LLM) have shown impressive general intelligence and human-like capabilities, which mainly stem from their extensive open-world knowledge, reasoning ability, as well as their comprehension of human culture and society. Consequently, the emergence of LLM is inspiring the design of recommender systems and pointing out a promising research direction, i.e., whether we can incorporate LLM and benefit from their knowledge and capabilities to compensate for the limitations of CRM. In this paper, we conduct a comprehensive survey on this research direction from the perspective of the whole pipeline in real-world recommender systems. Specifically, we summarize existing works from two orthogonal aspects: where and how to adapt LLM to RS. For the WHERE question, we discuss the roles that LLM could play in different stages of the recommendation pipeline, i.e., feature engineering, feature encoder, scoring/ranking function, user interaction, and pipeline controller. For the HOW question, we investigate the training and inference strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to tune LLM or not, and whether to involve conventional recommendation models for inference. Then, we highlight key challenges in adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and ethics. Finally, we summarize the survey and discuss the future prospects. We actively maintain a GitHub repository for papers and other related resources: https://github.com/CHIANGEL/Awesome-LLM-for-RecSys/." "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,bao2024large,\cite{bao2024large},"Large language models for recommendation: Past, present, and future",,,True,False,"Bao, Keqin and Zhang, Jizhi and Lin, Xinyu and Zhang, Yang and Wang, Wenjie and Feng, Fuli",2024.0,,,,,"Large language models for recommendation: Past, present, and future","Large Language Models for Recommendation: Past, Present ...",https://dl.acm.org/doi/10.1145/3626772.3661383,This tutorial aims to demystify the Large Language Model for Recommendation (LLM4Rec) by reviewing its evolution and delving into cutting-edge research. "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,wang2024towards,\cite{wang2024towards},"Towards Next-Generation LLM-based Recommender Systems: A Survey and Beyond",http://arxiv.org/abs/2410.19744v1,"Large language models (LLMs) have not only revolutionized the field of natural language processing (NLP) but also have the potential to bring a paradigm shift in many other fields due to their remarkable abilities of language understanding, as well as impressive generalization capabilities and reasoning skills. As a result, recent studies have actively attempted to harness the power of LLMs to improve recommender systems, and it is imperative to thoroughly review the recent advances and challenges of LLM-based recommender systems. Unlike existing work, this survey does not merely analyze the classifications of LLM-based recommendation systems according to the technical framework of LLMs. Instead, it investigates how LLMs can better serve recommendation tasks from the perspective of the recommender system community, thus enhancing the integration of large language models into the research of recommender system and its practical application. 
In addition, the long-standing gap between academic research and industrial applications related to recommender systems has not been well discussed, especially in the era of large language models. In this review, we introduce a novel taxonomy that originates from the intrinsic essence of recommendation, delving into the application of large language model-based recommendation systems and their industrial implementation. Specifically, we propose a three-tier structure that more accurately reflects the developmental progression of recommendation systems from research to practical implementation, including representing and understanding, scheming and utilizing, and industrial deployment. Furthermore, we discuss critical challenges and opportunities in this emerging field. A more up-to-date version of the papers is maintained at: https://github.com/jindongli-Ai/Next-Generation-LLM-based-Recommender-Systems-Survey.",True,True,"Wang, Qi and Li, Jindong and Wang, Shiqi and Xing, Qianli and Niu, Runliang and Kong, He and Li, Rui and Long, Guodong and Chang, Yi and Zhang, Chengqi",2024.0,,,,arXiv preprint arXiv:2410.19744,"Towards Next-Generation LLM-based Recommender Systems: A Survey and Beyond",Towards Next-Generation LLM-based Recommender Systems - arXiv,https://arxiv.org/html/2410.19744v1,"Unlike existing work, this survey does not merely analyze the classifications of LLM-based recommendation systems according to the technical framework of LLMs. Instead, it investigates how LLMs can better serve recommendation tasks from the perspective of the recommender system community, thus enhancing the integration of large language models into the research of recommender system and its practical application. With increasing efforts to explore large language model (LLM) methods for recommender systems (Zhu et al., 2024b; Tan et al., 2024; Gao et al., 2023a; Zhang et al., 2023a), several critical issues merit further investigation. (Agrawal et al., 2023) discusses the importance of content metadata in movie recommendation systems, particularly the role of genre labels in understanding user preferences and providing personalized recommendations." "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,liu2025llmemb,\cite{liu2025llmemb},"LLMEmb: Large Language Model Can Be a Good Embedding Generator for Sequential Recommendation",http://arxiv.org/abs/2409.19925v2,"Sequential Recommender Systems (SRS), which model a user's interaction history to predict the next item of interest, are widely used in various applications. However, existing SRS often struggle with low-popularity items, a challenge known as the long-tail problem. This issue leads to reduced serendipity for users and diminished profits for sellers, ultimately harming the overall system. Large Language Model (LLM) has the ability to capture semantic relationships between items, independent of their popularity, making it a promising solution to this problem. In this paper, we introduce LLMEmb, a novel method leveraging LLM to generate item embeddings that enhance SRS performance. To bridge the gap between general-purpose LLM and the recommendation domain, we propose a Supervised Contrastive Fine-Tuning (SCFT) approach. This approach includes attribute-level data augmentation and a tailored contrastive loss to make LLM more recommendation-friendly. 
Additionally, we emphasize the importance of integrating collaborative signals into LLM-generated embeddings, for which we propose Recommendation Adaptation Training (RAT). This further refines the embeddings for optimal use in SRS. The LLMEmb-derived embeddings can be seamlessly integrated with any SRS models, underscoring the practical value. Comprehensive experiments conducted on three real-world datasets demonstrate that LLMEmb significantly outperforms existing methods across multiple SRS models. The code for our method is released online https://github.com/Applied-Machine-Learning-Lab/LLMEmb.",True,True,"Liu, Qidong and Wu, Xian and Wang, Wanyu and Wang, Yejing and Zhu, Yuanshao and Zhao, Xiangyu and Tian, Feng and Zheng, Yefeng",2025.0,,,,,"LLMEmb: Large Language Model Can Be a Good Embedding Generator for Sequential Recommendation",[PDF] LLMEmb: Large Language Model Can Be a Good Embedding ...,https://ojs.aaai.org/index.php/AAAI/article/view/33327/35482,"In this paper, we propose a novel LLM-based generator, i.e., LLMEmb, to derive item embeddings for the sequential recommendation. Specifically, to equip the" "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,sun2025llmser,\cite{sun2025llmser},"LLMSeR: Enhancing Sequential Recommendation via LLM-based Data Augmentation",http://arxiv.org/abs/2503.12547v2,"Sequential Recommender Systems (SRS) have become a cornerstone of online platforms, leveraging users' historical interaction data to forecast their next potential engagement. Despite their widespread adoption, SRS often grapple with the long-tail user dilemma, resulting in less effective recommendations for individuals with limited interaction records. The advent of Large Language Models (LLMs), with their profound capability to discern semantic relationships among items, has opened new avenues for enhancing SRS through data augmentation. Nonetheless, current methodologies encounter obstacles, including the absence of collaborative signals and the prevalence of hallucination phenomena. In this work, we present LLMSeR, an innovative framework that utilizes Large Language Models (LLMs) to generate pseudo-prior items, thereby improving the efficacy of Sequential Recommender Systems (SRS). To alleviate the challenge of insufficient collaborative signals, we introduce the Semantic Interaction Augmentor (SIA), a method that integrates both semantic and collaborative information to comprehensively augment user interaction data. Moreover, to weaken the adverse effects of hallucination in SRS, we develop the Adaptive Reliability Validation (ARV), a validation technique designed to assess the reliability of the generated pseudo items. Complementing these advancements, we also devise a Dual-Channel Training strategy, ensuring seamless integration of data augmentation into the SRS training process. Extensive experiments conducted with three widely-used SRS models demonstrate the generalizability and efficacy of LLMSeR.",True,True,"Sun, Yuqi and Liu, Qidong and Zhu, Haiping and Tian, Feng",2025.0,,,,arXiv preprint arXiv:2503.12547,"LLMSeR: Enhancing Sequential Recommendation via LLM-based Data Augmentation",LLMSeR: Enhancing Sequential Recommendation via LLM ...,https://arxiv.org/html/2503.12547,"To address the issue of missing collaborative information in LLM-based data augmentation methods, we propose the Semantic Interaction Augmentor."
"Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,liu2024leader,\cite{liu2024leader},Large Language Model Distilling Medication Recommendation Model,http://arxiv.org/abs/2402.02803v2,"The recommendation of medication is a vital aspect of intelligent healthcare systems, as it involves prescribing the most suitable drugs based on a patient's specific health needs. Unfortunately, many sophisticated models currently in use tend to overlook the nuanced semantics of medical data, while only relying heavily on identities. Furthermore, these models face significant challenges in handling cases involving patients who are visiting the hospital for the first time, as they lack prior prescription histories to draw upon. To tackle these issues, we harness the powerful semantic comprehension and input-agnostic characteristics of Large Language Models (LLMs). Our research aims to transform existing medication recommendation methodologies using LLMs. In this paper, we introduce a novel approach called Large Language Model Distilling Medication Recommendation (LEADER). We begin by creating appropriate prompt templates that enable LLMs to suggest medications effectively. However, the straightforward integration of LLMs into recommender systems leads to an out-of-corpus issue specific to drugs. We handle it by adapting the LLMs with a novel output layer and a refined tuning loss function. Although LLM-based models exhibit remarkable capabilities, they are plagued by high computational costs during inference, which is impractical for the healthcare sector. To mitigate this, we have developed a feature-level knowledge distillation technique, which transfers the LLM's proficiency to a more compact model. Extensive experiments conducted on two real-world datasets, MIMIC-III and MIMIC-IV, demonstrate that our proposed model not only delivers effective results but also is efficient. To ease the reproducibility of our experiments, we release the implementation code online.",True,True,"Liu, Qidong and Wu, Xian and Zhao, Xiangyu and Zhu, Yuanshao and Zhang, Zijian and Tian, Feng and Zheng, Yefeng",2024.0,,,,arXiv preprint arXiv:2402.02803,Large Language Model Distilling Medication Recommendation Model,Large Language Model Distilling Medication Recommendation Model,http://arxiv.org/pdf/2402.02803v2,"The recommendation of medication is a vital aspect of intelligent healthcare systems, as it involves prescribing the most suitable drugs based on a patient's specific health needs. Unfortunately, many sophisticated models currently in use tend to overlook the nuanced semantics of medical data, while only relying heavily on identities. Furthermore, these models face significant challenges in handling cases involving patients who are visiting the hospital for the first time, as they lack prior prescription histories to draw upon. To tackle these issues, we harness the powerful semantic comprehension and input-agnostic characteristics of Large Language Models (LLMs). Our research aims to transform existing medication recommendation methodologies using LLMs. In this paper, we introduce a novel approach called Large Language Model Distilling Medication Recommendation (LEADER). We begin by creating appropriate prompt templates that enable LLMs to suggest medications effectively. However, the straightforward integration of LLMs into recommender systems leads to an out-of-corpus issue specific to drugs. 
We handle it by adapting the LLMs with a novel output layer and a refined tuning loss function. Although LLM-based models exhibit remarkable capabilities, they are plagued by high computational costs during inference, which is impractical for the healthcare sector. To mitigate this, we have developed a feature-level knowledge distillation technique, which transfers the LLM's proficiency to a more compact model. Extensive experiments conducted on two real-world datasets, MIMIC-III and MIMIC-IV, demonstrate that our proposed model not only delivers effective results but also is efficient. To ease the reproducibility of our experiments, we release the implementation code online." "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,wang2024llm4msr,\cite{wang2024llm4msr},LLM4MSR: An LLM-Enhanced Paradigm for Multi-Scenario Recommendation,http://arxiv.org/abs/2406.12529v1,"As the demand for more personalized recommendation grows and a dramatic boom in commercial scenarios arises, the study on multi-scenario recommendation (MSR) has attracted much attention, which uses the data from all scenarios to simultaneously improve their recommendation performance. However, existing methods tend to integrate insufficient scenario knowledge and neglect learning personalized cross-scenario preferences, thus leading to suboptimal performance and inadequate interpretability. Meanwhile, though large language model (LLM) has shown great capability of reasoning and capturing semantic information, the high inference latency and high computation cost of tuning hinder its implementation in industrial recommender systems. To fill these gaps, we propose an effective efficient interpretable LLM-enhanced paradigm LLM4MSR in this work. Specifically, we first leverage LLM to uncover multi-level knowledge including scenario correlations and users' cross-scenario interests from the designed scenario- and user-level prompt without fine-tuning the LLM, then adopt hierarchical meta networks to generate multi-level meta layers to explicitly improves the scenario-aware and personalized recommendation capability. Our experiments on KuaiSAR-small, KuaiSAR, and Amazon datasets validate two significant advantages of LLM4MSR: (i) the effectiveness and compatibility with different multi-scenario backbone models (achieving 1.5%, 1%, and 40% AUC improvement on three datasets), (ii) high efficiency and deployability on industrial recommender systems, and (iii) improved interpretability. The implemented code and data is available to ease reproduction.",True,True,"Wang, Yuhao and Wang, Yichao and Fu, Zichuan and Li, Xiangyang and Wang, Wanyu and Ye, Yuyang and Zhao, Xiangyu and Guo, Huifeng and Tang, Ruiming",2024.0,,,,,LLM4MSR: An LLM-Enhanced Paradigm for Multi-Scenario Recommendation,An LLM-Enhanced Paradigm for Multi-Scenario Recommendation,https://arxiv.org/html/2406.12529v1,"Specifically, we first leverage LLM to uncover multi-level knowledge including scenario correlations and users’ cross-scenario interests from the designed scenario- and user-level prompt without fine-tuning the LLM, then adopt hierarchical meta networks to generate multi-level meta layers to explicitly improves the scenario-aware and personalized recommendation capability. 
(ii) Users’ personalized preferences across scenarios tend to be ignored, since most MSR models merely rely on different parameter sharing patterns in multi-task learning (Sheng et al., 2021; Wang et al., 2023a) and the collaborative signal learned in conventional RSs to conduct recommendation. To this end, we propose an LLM-based paradigm to enhance conventional multi-scenario recommendation, which leverages LLM and hierarchical meta networks to explicitly improve the performance of the backbone model on all scenarios." "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,fu2023unified,\cite{fu2023unified},A unified framework for multi-domain ctr prediction via large language models,,,True,False,"Fu, Zichuan and Li, Xiangyang and Wu, Chuhan and Wang, Yichao and Dong, Kuicai and Zhao, Xiangyu and Zhao, Mengchen and Guo, Huifeng and Tang, Ruiming",2023.0,,,,ACM Transactions on Information Systems,A unified framework for multi-domain ctr prediction via large language models,[2312.10743] A Unified Framework for Multi-Domain CTR Prediction ...,https://arxiv.org/abs/2312.10743,"[2312.10743] A Unified Framework for Multi-Domain CTR Prediction via Large Language Models, by Zichuan Fu and 8 other authors." "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,zhao2023survey,\cite{zhao2023survey},Large Language Models: A Survey,http://arxiv.org/abs/2402.06196v3,"Large Language Models (LLMs) have drawn a lot of attention due to their strong performance on a wide range of natural language tasks, since the release of ChatGPT in November 2022. LLMs' ability of general-purpose language understanding and generation is acquired by training billions of model's parameters on massive amounts of text data, as predicted by scaling laws \cite{kaplan2020scaling,hoffmann2022training}. The research area of LLMs, while very recent, is evolving rapidly in many different ways. In this paper, we review some of the most prominent LLMs, including three popular LLM families (GPT, LLaMA, PaLM), and discuss their characteristics, contributions and limitations. We also give an overview of techniques developed to build, and augment LLMs. We then survey popular datasets prepared for LLM training, fine-tuning, and evaluation, review widely used LLM evaluation metrics, and compare the performance of several popular LLMs on a set of representative benchmarks.
Finally, we conclude the paper by discussing open challenges and future research directions.",True,True,"Zhao, Wayne Xin and Zhou, Kun and Li, Junyi and Tang, Tianyi and Wang, Xiaolei and Hou, Yupeng and Min, Yingqian and Zhang, Beichen and Zhang, Junjie and Dong, Zican and others",2023.0,,,,arXiv preprint arXiv:2303.18223,Large Language Models: A Survey,Large Language Models: A Survey,http://arxiv.org/pdf/2402.06196v3,"Large Language Models (LLMs) have drawn a lot of attention due to their strong performance on a wide range of natural language tasks, since the release of ChatGPT in November 2022. LLMs' ability of general-purpose language understanding and generation is acquired by training billions of model's parameters on massive amounts of text data, as predicted by scaling laws \cite{kaplan2020scaling,hoffmann2022training}. The research area of LLMs, while very recent, is evolving rapidly in many different ways. In this paper, we review some of the most prominent LLMs, including three popular LLM families (GPT, LLaMA, PaLM), and discuss their characteristics, contributions and limitations. We also give an overview of techniques developed to build, and augment LLMs. We then survey popular datasets prepared for LLM training, fine-tuning, and evaluation, review widely used LLM evaluation metrics, and compare the performance of several popular LLMs on a set of representative benchmarks. Finally, we conclude the paper by discussing open challenges and future research directions." "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,long2024got4rec,\cite{long2024got4rec},GOT4Rec: Graph of Thoughts for Sequential Recommendation,http://arxiv.org/abs/2411.14922v2,"With their vast open-world knowledge and reasoning abilities, large language models (LLMs) have become a promising tool for sequential recommendation. Researchers have explored various methods to harness these capabilities, but most existing approaches rely on simple input-output prompting, failing to effectively bridge the gap between LLMs' general knowledge and the specific needs of recommendation tasks. While reasoning strategies like chain-of-thought (CoT) have been introduced to enhance performance, they often produce inaccurate recommendations due to underutilized user preference information and insufficient reasoning depth. To address these challenges, we propose GOT4Rec, a novel sequential recommendation method leveraging the graph of thoughts (GoT) reasoning strategy. Our method focuses on three key types of information in user histories: short-term interests, long-term interests and collaborative information from other users. It enables LLMs to reason independently and generate recommendations, subsequently aggregating results to derive final items. This method allows LLMs, with enhanced reasoning capabilities, to better utilize the user sequence information, producing more accurate recommendations and comprehensive explanations. Extensive experiments on real-world datasets demonstrate the effectiveness of GOT4Rec, outperforming existing state-of-the-art baselines with an average improvement of 37.11%. 
Our code is available at https://anonymous.4open.science/r/GOT4Rec.",True,True,"Long, Zewen and Wang, Liang and Wu, Shu and Liu, Qiang",2024.0,,,,arXiv preprint arXiv:2411.14922,GOT4Rec: Graph of Thoughts for Sequential Recommendation,GOT4Rec: Graph of Thoughts for Sequential Recommendation,http://arxiv.org/pdf/2411.14922v2,"With their vast open-world knowledge and reasoning abilities, large language models (LLMs) have become a promising tool for sequential recommendation. Researchers have explored various methods to harness these capabilities, but most existing approaches rely on simple input-output prompting, failing to effectively bridge the gap between LLMs' general knowledge and the specific needs of recommendation tasks. While reasoning strategies like chain-of-thought (CoT) have been introduced to enhance performance, they often produce inaccurate recommendations due to underutilized user preference information and insufficient reasoning depth. To address these challenges, we propose GOT4Rec, a novel sequential recommendation method leveraging the graph of thoughts (GoT) reasoning strategy. Our method focuses on three key types of information in user histories: short-term interests, long-term interests and collaborative information from other users. It enables LLMs to reason independently and generate recommendations, subsequently aggregating results to derive final items. This method allows LLMs, with enhanced reasoning capabilities, to better utilize the user sequence information, producing more accurate recommendations and comprehensive explanations. Extensive experiments on real-world datasets demonstrate the effectiveness of GOT4Rec, outperforming existing state-of-the-art baselines with an average improvement of 37.11%. Our code is available at https://anonymous.4open.science/r/GOT4Rec." "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,bao2023tallrec,\cite{bao2023tallrec},"TALLRec: An Effective and Efficient Tuning Framework to Align Large Language Model with Recommendation",http://arxiv.org/abs/2305.00447v3,"Large Language Models (LLMs) have demonstrated remarkable performance across diverse domains, thereby prompting researchers to explore their potential for use in recommendation systems. Initial attempts have leveraged the exceptional capabilities of LLMs, such as rich knowledge and strong generalization through In-context Learning, which involves phrasing the recommendation task as prompts. Nevertheless, the performance of LLMs in recommendation tasks remains suboptimal due to a substantial disparity between the training tasks for LLMs and recommendation tasks, as well as inadequate recommendation data during pre-training. To bridge the gap, we consider building a Large Recommendation Language Model by tunning LLMs with recommendation data. To this end, we propose an efficient and effective Tuning framework for Aligning LLMs with Recommendation, namely TALLRec. We have demonstrated that the proposed TALLRec framework can significantly enhance the recommendation capabilities of LLMs in the movie and book domains, even with a limited dataset of fewer than 100 samples. Additionally, the proposed framework is highly efficient and can be executed on a single RTX 3090 with LLaMA-7B. Furthermore, the fine-tuned LLM exhibits robust cross-domain generalization. 
Our code and data are available at https://github.com/SAI990323/TALLRec.",True,True,"Bao, Keqin and Zhang, Jizhi and Zhang, Yang and Wang, Wenjie and Feng, Fuli and He, Xiangnan",2023.0,,,,,"TALLRec: An Effective and Efficient Tuning Framework to Align Large Language Model with Recommendation",Multi-view Intent Learning and Alignment with Large Language ...,https://dl.acm.org/doi/10.1145/3719344,Tallrec: An effective and efficient tuning framework to align large language model with recommendation ... Online AM: 08 April 2025. "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,chen2024softmax,\cite{chen2024softmax},On Softmax Direct Preference Optimization for Recommendation,http://arxiv.org/abs/2406.09215v3,"Recommender systems aim to predict personalized rankings based on user preference data. With the rise of Language Models (LMs), LM-based recommenders have been widely explored due to their extensive world knowledge and powerful reasoning abilities. Most of the LM-based recommenders convert historical interactions into language prompts, pairing with a positive item as the target response and fine-tuning LM with a language modeling loss. However, the current objective fails to fully leverage preference data and is not optimized for personalized ranking tasks, which hinders the performance of LM-based recommenders. Inspired by the current advancement of Direct Preference Optimization (DPO) in human preference alignment and the success of softmax loss in recommendations, we propose Softmax-DPO (S-DPO) to instill ranking information into the LM to help LM-based recommenders distinguish preferred items from negatives, rather than solely focusing on positives. Specifically, we incorporate multiple negatives in user preference data and devise an alternative version of DPO loss tailored for LM-based recommenders, which is extended from the traditional full-ranking Plackett-Luce (PL) model to partial rankings and connected to softmax sampling strategies. Theoretically, we bridge S-DPO with the softmax loss over negative sampling and find that it has an inherent benefit of mining hard negatives, which assures its exceptional capabilities in recommendation tasks. Empirically, extensive experiments conducted on three real-world datasets demonstrate the superiority of S-DPO to effectively model user preference and further boost recommendation performance while providing better rewards for preferred items. Our codes are available at https://github.com/chenyuxin1999/S-DPO.",True,True,"Chen, Yuxin and Tan, Junfei and Zhang, An and Yang, Zhengyi and Sheng, Leheng and Zhang, Enzhi and Wang, Xiang and Chua, Tat-Seng",2024.0,,,,arXiv preprint arXiv:2406.09215,On Softmax Direct Preference Optimization for Recommendation,On Softmax Direct Preference Optimization for Recommendation,http://arxiv.org/pdf/2406.09215v3,"Recommender systems aim to predict personalized rankings based on user preference data. With the rise of Language Models (LMs), LM-based recommenders have been widely explored due to their extensive world knowledge and powerful reasoning abilities. Most of the LM-based recommenders convert historical interactions into language prompts, pairing with a positive item as the target response and fine-tuning LM with a language modeling loss. However, the current objective fails to fully leverage preference data and is not optimized for personalized ranking tasks, which hinders the performance of LM-based recommenders. 
Inspired by the current advancement of Direct Preference Optimization (DPO) in human preference alignment and the success of softmax loss in recommendations, we propose Softmax-DPO (S-DPO) to instill ranking information into the LM to help LM-based recommenders distinguish preferred items from negatives, rather than solely focusing on positives. Specifically, we incorporate multiple negatives in user preference data and devise an alternative version of DPO loss tailored for LM-based recommenders, which is extended from the traditional full-ranking Plackett-Luce (PL) model to partial rankings and connected to softmax sampling strategies. Theoretically, we bridge S-DPO with the softmax loss over negative sampling and find that it has an inherent benefit of mining hard negatives, which assures its exceptional capabilities in recommendation tasks. Empirically, extensive experiments conducted on three real-world datasets demonstrate the superiority of S-DPO to effectively model user preference and further boost recommendation performance while providing better rewards for preferred items. Our codes are available at https://github.com/chenyuxin1999/S-DPO." "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,liu2024llmers,\cite{liu2024llmers},"Large language model enhanced recommender systems: Taxonomy, trend, application and future",,,True,False,"Liu, Qidong and Zhao, Xiangyu and Wang, Yuhao and Wang, Yejing and Zhang, Zijian and Sun, Yuqi and Li, Xiang and Wang, Maolin and Jia, Pengyue and Chen, Chong and others",2024.0,,,,arXiv preprint arXiv:2412.13432,"Large language model enhanced recommender systems: Taxonomy, trend, application and future",Large Language Model Enhanced Recommender Systems,https://arxiv.org/abs/2412.13432,"arXiv:2412.13432 (cs). Large Language Model Enhanced Recommender Systems: A Survey, by Qidong Liu and 10 other authors (arXiv:2412.13432v3 [cs.IR] for this version)." "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,hu2024enhancing,\cite{hu2024enhancing},Enhancing sequential recommendation via llm-based semantic embedding learning,,,True,False,"Hu, Jun and Xia, Wenwen and Zhang, Xiaolu and Fu, Chilin and Wu, Weichang and Huan, Zhaoxin and Li, Ang and Tang, Zuoli and Zhou, Jun",2024.0,,,,,Enhancing sequential recommendation via llm-based semantic embedding learning,Enhancing Sequential Recommendation via LLM-based Semantic...,https://openreview.net/forum?id=k69pbhRWPD,"Enhancing Sequential Recommendation via LLM-based Semantic Embedding Learning | OpenReview. Specifically, directly extracting representations from an LLM based on items' textual features and feeding them into a sequential model hold no guarantee that the semantic information of texts could be preserved in these representations. 
Additionally, concatenating textual descriptions of all items in an item sequence into a long text and feeding it into an LLM for recommendation results in lengthy token sequences, which largely diminishes the practical efficiency. In this paper, we introduce SAID, a framework that utilizes LLMs to explicitly learn Semantically Aligned item ID embeddings based on texts." "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,liu2024practice,\cite{liu2024practice},A Practice-Friendly Two-Stage LLM-Enhanced Paradigm in Sequential Recommendation,,,True,False,"Liu, Dugang and Xian, Shenxian and Lin, Xiaolin and Zhang, Xiaolian and Zhu, Hong and Fang, Yuan and Chen, Zhen and Ming, Zhong",2024.0,,,,arXiv preprint arXiv:2406.00333,A Practice-Friendly Two-Stage LLM-Enhanced Paradigm in Sequential Recommendation,A Practice-Friendly Two-Stage LLM-Enhanced Paradigm in ...,https://openreview.net/pdf?id=fGgcefD1su,ABSTRACT. The training paradigm integrating large language models (LLM) is gradually reshaping sequential recommender systems (SRS) and has. "Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation",2504.18383v1,liu2024llm,\cite{liu2024llm},Llm-esr: Large language models enhancement for long-tailed sequential recommendation,,,True,False,"Liu, Qidong and Wu, Xian and Wang, Yejing and Zhang, Zijian and Tian, Feng and Zheng, Yefeng and Zhao, Xiangyu",2024.0,,,,Advances in Neural Information Processing Systems,Llm-esr: Large language models enhancement for long-tailed sequential recommendation,[PDF] LLM-ESR: Large Language Models Enhancement for Long-tailed ...,https://proceedings.neurips.cc/paper_files/paper/2024/file/2f0728449cb3150189d765fc87afc913-Paper-Conference.pdf,"Firstly, we derive the semantic embeddings of items and users by encoding prompt texts from LLMs. Since these embeddings can be cached in advance, our integration does not impose any extra inference burden from LLMs. To tackle the long-tail item challenge, we devise a dual-view modeling framework that combines semantic and collaborative information. Figure 2: The overview of the proposed LLM-ESR framework. The contributions of this paper are as follows: • We propose a large language models enhancement framework, which can alleviate both long-tail user and item challenges for SRS by introducing semantic information from LLMs." Replication and Exploration of Generative Retrieval over Dynamic Corpora,2504.17519v1,de2020autoregressive,\cite{de2020autoregressive},Autoregressive Entity Retrieval,http://arxiv.org/abs/2010.00904v3,"Entities are at the center of how we represent and aggregate knowledge. For instance, Encyclopedias such as Wikipedia are structured by entities (e.g., one per Wikipedia article). The ability to retrieve such entities given a query is fundamental for knowledge-intensive tasks such as entity linking and open-domain question answering. Current approaches can be understood as classifiers among atomic labels, one for each entity. Their weight vectors are dense entity representations produced by encoding entity meta information such as their descriptions. 
This approach has several shortcomings: (i) context and entity affinity is mainly captured through a vector dot product, potentially missing fine-grained interactions; (ii) a large memory footprint is needed to store dense representations when considering large entity sets; (iii) an appropriately hard set of negative data has to be subsampled at training time. In this work, we propose GENRE, the first system that retrieves entities by generating their unique names, left to right, token-by-token in an autoregressive fashion. This mitigates the aforementioned technical issues since: (i) the autoregressive formulation directly captures relations between context and entity name, effectively cross encoding both; (ii) the memory footprint is greatly reduced because the parameters of our encoder-decoder architecture scale with vocabulary size, not entity count; (iii) the softmax loss is computed without subsampling negative data. We experiment with more than 20 datasets on entity disambiguation, end-to-end entity linking and document retrieval tasks, achieving new state-of-the-art or very competitive results while using a tiny fraction of the memory footprint of competing systems. Finally, we demonstrate that new entities can be added by simply specifying their names. Code and pre-trained models at https://github.com/facebookresearch/GENRE.",True,True,"De Cao, Nicola and Izacard, Gautier and Riedel, Sebastian and Petroni, Fabio",2020.0,,,,arXiv preprint arXiv:2010.00904,Autoregressive Entity Retrieval,Autoregressive Entity Retrieval,http://arxiv.org/pdf/2010.00904v3,"Entities are at the center of how we represent and aggregate knowledge. For instance, Encyclopedias such as Wikipedia are structured by entities (e.g., one per Wikipedia article). The ability to retrieve such entities given a query is fundamental for knowledge-intensive tasks such as entity linking and open-domain question answering. Current approaches can be understood as classifiers among atomic labels, one for each entity. Their weight vectors are dense entity representations produced by encoding entity meta information such as their descriptions. This approach has several shortcomings: (i) context and entity affinity is mainly captured through a vector dot product, potentially missing fine-grained interactions; (ii) a large memory footprint is needed to store dense representations when considering large entity sets; (iii) an appropriately hard set of negative data has to be subsampled at training time. In this work, we propose GENRE, the first system that retrieves entities by generating their unique names, left to right, token-by-token in an autoregressive fashion. This mitigates the aforementioned technical issues since: (i) the autoregressive formulation directly captures relations between context and entity name, effectively cross encoding both; (ii) the memory footprint is greatly reduced because the parameters of our encoder-decoder architecture scale with vocabulary size, not entity count; (iii) the softmax loss is computed without subsampling negative data. We experiment with more than 20 datasets on entity disambiguation, end-to-end entity linking and document retrieval tasks, achieving new state-of-the-art or very competitive results while using a tiny fraction of the memory footprint of competing systems. Finally, we demonstrate that new entities can be added by simply specifying their names. Code and pre-trained models at https://github.com/facebookresearch/GENRE." 
Replication and Exploration of Generative Retrieval over Dynamic Corpora,2504.17519v1,tay2022transformer,\cite{tay2022transformer},Transformer Memory as a Differentiable Search Index,http://arxiv.org/abs/2202.06991v3,"In this paper, we demonstrate that information retrieval can be accomplished with a single Transformer, in which all information about the corpus is encoded in the parameters of the model. To this end, we introduce the Differentiable Search Index (DSI), a new paradigm that learns a text-to-text model that maps string queries directly to relevant docids; in other words, a DSI model answers queries directly using only its parameters, dramatically simplifying the whole retrieval process. We study variations in how documents and their identifiers are represented, variations in training procedures, and the interplay between models and corpus sizes. Experiments demonstrate that given appropriate design choices, DSI significantly outperforms strong baselines such as dual encoder models. Moreover, DSI demonstrates strong generalization capabilities, outperforming a BM25 baseline in a zero-shot setup.",True,True,"Tay, Yi and Tran, Vinh and Dehghani, Mostafa and Ni, Jianmo and Bahri, Dara and Mehta, Harsh and Qin, Zhen and Hui, Kai and Zhao, Zhe and Gupta, Jai and others",2022.0,,,,Advances in Neural Information Processing Systems,Transformer Memory as a Differentiable Search Index,Transformer Memory as a Differentiable Search Index,http://arxiv.org/pdf/2202.06991v3,"In this paper, we demonstrate that information retrieval can be accomplished with a single Transformer, in which all information about the corpus is encoded in the parameters of the model. To this end, we introduce the Differentiable Search Index (DSI), a new paradigm that learns a text-to-text model that maps string queries directly to relevant docids; in other words, a DSI model answers queries directly using only its parameters, dramatically simplifying the whole retrieval process. We study variations in how documents and their identifiers are represented, variations in training procedures, and the interplay between models and corpus sizes. Experiments demonstrate that given appropriate design choices, DSI significantly outperforms strong baselines such as dual encoder models. Moreover, DSI demonstrates strong generalization capabilities, outperforming a BM25 baseline in a zero-shot setup." Replication and Exploration of Generative Retrieval over Dynamic Corpora,2504.17519v1,zhou2022dynamicretriever,\cite{zhou2022dynamicretriever},"DynamicRetriever: A Pre-training Model-based IR System with Neither Sparse nor Dense Index",http://arxiv.org/abs/2203.00537v1,"Web search provides a promising way for people to obtain information and has been extensively studied. With the surgence of deep learning and large-scale pre-training techniques, various neural information retrieval models are proposed and they have demonstrated the power for improving search (especially, the ranking) quality. All these existing search methods follow a common paradigm, i.e. index-retrieve-rerank, where they first build an index of all documents based on document terms (i.e., sparse inverted index) or representation vectors (i.e., dense vector index), then retrieve and rerank retrieved documents based on similarity between the query and documents via ranking models. In this paper, we explore a new paradigm of information retrieval with neither sparse nor dense index but only a model. 
Specifically, we propose a pre-training model-based IR system called DynamicRetriever. As for this system, the training stage embeds the token-level and document-level information (especially, document identifiers) of the corpus into the model parameters, then the inference stage directly generates document identifiers for a given query. Compared with existing search methods, the model-based IR system has two advantages: i) it parameterizes the traditional static index with a pre-training model, which converts the document semantic mapping into a dynamic and updatable process; ii) with separate document identifiers, it captures both the term-level and document-level information for each document. Extensive experiments conducted on the public search benchmark MS MARCO verify the effectiveness and potential of our proposed new paradigm for information retrieval.",True,True,"Zhou, Yujia and Yao, Jing and Dou, Zhicheng and Wu, Ledell and Wen, Ji-Rong",2022.0,,,,arXiv preprint arXiv:2203.00537,"DynamicRetriever: A Pre-training Model-based IR System with Neither Sparse nor Dense Index",[2203.00537] DynamicRetriever: A Pre-training Model-based IR ...,https://arxiv.org/abs/2203.00537,"DynamicRetriever: A Pre-training Model-based IR System with Neither Sparse nor Dense Index, by Yujia Zhou and 4 other authors. In this paper, we explore a new paradigm of information retrieval with neither sparse nor dense index but only a model." Replication and Exploration of Generative Retrieval over Dynamic Corpora,2504.17519v1,wang2022neural,\cite{wang2022neural},A Neural Corpus Indexer for Document Retrieval,http://arxiv.org/abs/2206.02743v3,"Current state-of-the-art document retrieval solutions mainly follow an index-retrieve paradigm, where the index is hard to be directly optimized for the final retrieval target. In this paper, we aim to show that an end-to-end deep neural network unifying training and indexing stages can significantly improve the recall performance of traditional methods. To this end, we propose Neural Corpus Indexer (NCI), a sequence-to-sequence network that generates relevant document identifiers directly for a designated query. To optimize the recall performance of NCI, we invent a prefix-aware weight-adaptive decoder architecture, and leverage tailored techniques including query generation, semantic document identifiers, and consistency-based regularization. Empirical studies demonstrated the superiority of NCI on two commonly used academic benchmarks, achieving +21.4% and +16.8% relative enhancement for Recall@1 on NQ320k dataset and R-Precision on TriviaQA dataset, respectively, compared to the best baseline method.",True,True,"Wang, Yujing and Hou, Yingyan and Wang, Haonan and Miao, Ziming and Wu, Shibin and Chen, Qi and Xia, Yuqing and Chi, Chengmin and Zhao, Guoshuai and Liu, Zheng and others",2022.0,,,,Advances in Neural Information Processing Systems,A Neural Corpus Indexer for Document Retrieval,A Neural Corpus Indexer for Document Retrieval,http://arxiv.org/pdf/2206.02743v3,"Current state-of-the-art document retrieval solutions mainly follow an index-retrieve paradigm, where the index is hard to be directly optimized for the final retrieval target. 
In this paper, we aim to show that an end-to-end deep neural network unifying training and indexing stages can significantly improve the recall performance of traditional methods. To this end, we propose Neural Corpus Indexer (NCI), a sequence-to-sequence network that generates relevant document identifiers directly for a designated query. To optimize the recall performance of NCI, we invent a prefix-aware weight-adaptive decoder architecture, and leverage tailored techniques including query generation, semantic document identifiers, and consistency-based regularization. Empirical studies demonstrated the superiority of NCI on two commonly used academic benchmarks, achieving +21.4% and +16.8% relative enhancement for Recall@1 on NQ320k dataset and R-Precision on TriviaQA dataset, respectively, compared to the best baseline method." Replication and Exploration of Generative Retrieval over Dynamic Corpora,2504.17519v1,zhou2022ultron,\cite{zhou2022ultron},Ultron: An Ultimate Retriever on Corpus with a Model-based Indexer,http://arxiv.org/abs/2208.09257v1,"Document retrieval has been extensively studied within the index-retrieve framework for decades, which has withstood the test of time. Unfortunately, such a pipelined framework limits the optimization of the final retrieval quality, because indexing and retrieving are separated stages that can not be jointly optimized in an end-to-end manner. In order to unify these two stages, we explore a model-based indexer for document retrieval. Concretely, we propose Ultron, which encodes the knowledge of all documents into the model and aims to directly retrieve relevant documents end-to-end. For the model-based indexer, how to represent docids and how to train the model are two main issues to be explored. Existing solutions suffer from semantically deficient docids and limited supervised data. To tackle these two problems, first, we devise two types of docids that are richer in semantics and easier for model inference. In addition, we propose a three-stage training workflow to capture more knowledge contained in the corpus and associations between queries and docids. Experiments on two public datasets demonstrate the superiority of Ultron over advanced baselines for document retrieval.",True,True,"Zhou, Yujia and Yao, Jing and Dou, Zhicheng and Wu, Ledell Yu and Zhang, Peitian and rong Wen, Ji",2022.0,,,,,Ultron: An Ultimate Retriever on Corpus with a Model-based Indexer,"smallporridge/WebUltron: The source code of the paper "" ...",https://github.com/smallporridge/WebUltron,"This is the official repo for ""Ultron: An Ultimate Retriever on Corpus with a Model-based Indexer"". Quick Tour. In this work, we propose Ultron, an ultimate" Replication and Exploration of Generative Retrieval over Dynamic Corpora,2504.17519v1,zeng2024scalable,\cite{zeng2024scalable},Scalable and Effective Generative Information Retrieval,http://arxiv.org/abs/2311.09134v1,"Recent research has shown that transformer networks can be used as differentiable search indexes by representing each document as a sequences of document ID tokens. These generative retrieval models cast the retrieval problem to a document ID generation problem for each given query. Despite their elegant design, existing generative retrieval models only perform well on artificially-constructed and small-scale collections. This has led to serious skepticism in the research community on their real-world impact. 
This paper represents an important milestone in generative retrieval research by showing, for the first time, that generative retrieval models can be trained to perform effectively on large-scale standard retrieval benchmarks. For doing so, we propose RIPOR- an optimization framework for generative retrieval that can be adopted by any encoder-decoder architecture. RIPOR is designed based on two often-overlooked fundamental design considerations in generative retrieval. First, given the sequential decoding nature of document ID generation, assigning accurate relevance scores to documents based on the whole document ID sequence is not sufficient. To address this issue, RIPOR introduces a novel prefix-oriented ranking optimization algorithm. Second, initial document IDs should be constructed based on relevance associations between queries and documents, instead of the syntactic and semantic information in the documents. RIPOR addresses this issue using a relevance-based document ID construction approach that quantizes relevance-based representations learned for documents. Evaluation on MSMARCO and TREC Deep Learning Track reveals that RIPOR surpasses state-of-the-art generative retrieval models by a large margin (e.g., 30.5% MRR improvements on MS MARCO Dev Set), and perform better on par with popular dense retrieval models.",True,True,"Zeng, Hansi and Luo, Chen and Jin, Bowen and Sarwar, Sheikh Muhammad and Wei, Tianxin and Zamani, Hamed",2024.0,,,,,Scalable and Effective Generative Information Retrieval,Scalable and Effective Generative Information Retrieval,http://arxiv.org/pdf/2311.09134v1,"Recent research has shown that transformer networks can be used as differentiable search indexes by representing each document as a sequences of document ID tokens. These generative retrieval models cast the retrieval problem to a document ID generation problem for each given query. Despite their elegant design, existing generative retrieval models only perform well on artificially-constructed and small-scale collections. This has led to serious skepticism in the research community on their real-world impact. This paper represents an important milestone in generative retrieval research by showing, for the first time, that generative retrieval models can be trained to perform effectively on large-scale standard retrieval benchmarks. For doing so, we propose RIPOR- an optimization framework for generative retrieval that can be adopted by any encoder-decoder architecture. RIPOR is designed based on two often-overlooked fundamental design considerations in generative retrieval. First, given the sequential decoding nature of document ID generation, assigning accurate relevance scores to documents based on the whole document ID sequence is not sufficient. To address this issue, RIPOR introduces a novel prefix-oriented ranking optimization algorithm. Second, initial document IDs should be constructed based on relevance associations between queries and documents, instead of the syntactic and semantic information in the documents. RIPOR addresses this issue using a relevance-based document ID construction approach that quantizes relevance-based representations learned for documents. Evaluation on MSMARCO and TREC Deep Learning Track reveals that RIPOR surpasses state-of-the-art generative retrieval models by a large margin (e.g., 30.5% MRR improvements on MS MARCO Dev Set), and perform better on par with popular dense retrieval models." 
Replication and Exploration of Generative Retrieval over Dynamic Corpora,2504.17519v1,sun2024learning,\cite{sun2024learning},Learning to Tokenize for Generative Retrieval,http://arxiv.org/abs/2304.04171v1,"Conventional document retrieval techniques are mainly based on the index-retrieve paradigm. It is challenging to optimize pipelines based on this paradigm in an end-to-end manner. As an alternative, generative retrieval represents documents as identifiers (docid) and retrieves documents by generating docids, enabling end-to-end modeling of document retrieval tasks. However, it is an open question how one should define the document identifiers. Current approaches to the task of defining document identifiers rely on fixed rule-based docids, such as the title of a document or the result of clustering BERT embeddings, which often fail to capture the complete semantic information of a document. We propose GenRet, a document tokenization learning method to address the challenge of defining document identifiers for generative retrieval. GenRet learns to tokenize documents into short discrete representations (i.e., docids) via a discrete auto-encoding approach. Three components are included in GenRet: (i) a tokenization model that produces docids for documents; (ii) a reconstruction model that learns to reconstruct a document based on a docid; and (iii) a sequence-to-sequence retrieval model that generates relevant document identifiers directly for a designated query. By using an auto-encoding framework, GenRet learns semantic docids in a fully end-to-end manner. We also develop a progressive training scheme to capture the autoregressive nature of docids and to stabilize training. We conduct experiments on the NQ320K, MS MARCO, and BEIR datasets to assess the effectiveness of GenRet. GenRet establishes the new state-of-the-art on the NQ320K dataset. Especially, compared to generative retrieval baselines, GenRet can achieve significant improvements on the unseen documents. GenRet also outperforms comparable baselines on MS MARCO and BEIR, demonstrating the method's generalizability.",True,True,"Sun, Weiwei and Yan, Lingyong and Chen, Zheng and Wang, Shuaiqiang and Zhu, Haichao and Ren, Pengjie and Chen, Zhumin and Yin, Dawei and Rijke, Maarten and Ren, Zhaochun",2024.0,,,,Advances in Neural Information Processing Systems,Learning to Tokenize for Generative Retrieval,Learning to Tokenize for Generative Retrieval,http://arxiv.org/pdf/2304.04171v1,"Conventional document retrieval techniques are mainly based on the index-retrieve paradigm. It is challenging to optimize pipelines based on this paradigm in an end-to-end manner. As an alternative, generative retrieval represents documents as identifiers (docid) and retrieves documents by generating docids, enabling end-to-end modeling of document retrieval tasks. However, it is an open question how one should define the document identifiers. Current approaches to the task of defining document identifiers rely on fixed rule-based docids, such as the title of a document or the result of clustering BERT embeddings, which often fail to capture the complete semantic information of a document. We propose GenRet, a document tokenization learning method to address the challenge of defining document identifiers for generative retrieval. GenRet learns to tokenize documents into short discrete representations (i.e., docids) via a discrete auto-encoding approach. 
Three components are included in GenRet: (i) a tokenization model that produces docids for documents; (ii) a reconstruction model that learns to reconstruct a document based on a docid; and (iii) a sequence-to-sequence retrieval model that generates relevant document identifiers directly for a designated query. By using an auto-encoding framework, GenRet learns semantic docids in a fully end-to-end manner. We also develop a progressive training scheme to capture the autoregressive nature of docids and to stabilize training. We conduct experiments on the NQ320K, MS MARCO, and BEIR datasets to assess the effectiveness of GenRet. GenRet establishes the new state-of-the-art on the NQ320K dataset. Especially, compared to generative retrieval baselines, GenRet can achieve significant improvements on the unseen documents. GenRet also outperforms comparable baselines on MS MARCO and BEIR, demonstrating the method's generalizability." Replication and Exploration of Generative Retrieval over Dynamic Corpora,2504.17519v1,chen2022gere,\cite{chen2022gere},GERE: Generative Evidence Retrieval for Fact Verification,http://arxiv.org/abs/2204.05511v3,"Fact verification (FV) is a challenging task which aims to verify a claim using multiple evidential sentences from trustworthy corpora, e.g., Wikipedia. Most existing approaches follow a three-step pipeline framework, including document retrieval, sentence retrieval and claim verification. High-quality evidences provided by the first two steps are the foundation of the effective reasoning in the last step. Despite being important, high-quality evidences are rarely studied by existing works for FV, which often adopt the off-the-shelf models to retrieve relevant documents and sentences in an ""index-retrieve-then-rank"" fashion. This classical approach has clear drawbacks as follows: i) a large document index as well as a complicated search process is required, leading to considerable memory and computational overhead; ii) independent scoring paradigms fail to capture the interactions among documents and sentences in ranking; iii) a fixed number of sentences are selected to form the final evidence set. In this work, we propose GERE, the first system that retrieves evidences in a generative fashion, i.e., generating the document titles as well as evidence sentence identifiers. This enables us to mitigate the aforementioned technical issues since: i) the memory and computational cost is greatly reduced because the document index is eliminated and the heavy ranking process is replaced by a light generative process; ii) the dependency between documents and that between sentences could be captured via sequential generation process; iii) the generative formulation allows us to dynamically select a precise set of relevant evidences for each claim. The experimental results on the FEVER dataset show that GERE achieves significant improvements over the state-of-the-art baselines, with both time-efficiency and memory-efficiency.",True,True,"Chen, Jiangui and Zhang, Ruqing and Guo, Jiafeng and Fan, Yixing and Cheng, Xueqi",2022.0,,,,,GERE: Generative Evidence Retrieval for Fact Verification,GERE: Generative Evidence Retrieval for Fact Verification,http://arxiv.org/pdf/2204.05511v3,"Fact verification (FV) is a challenging task which aims to verify a claim using multiple evidential sentences from trustworthy corpora, e.g., Wikipedia. Most existing approaches follow a three-step pipeline framework, including document retrieval, sentence retrieval and claim verification. 
High-quality evidences provided by the first two steps are the foundation of the effective reasoning in the last step. Despite being important, high-quality evidences are rarely studied by existing works for FV, which often adopt the off-the-shelf models to retrieve relevant documents and sentences in an ""index-retrieve-then-rank"" fashion. This classical approach has clear drawbacks as follows: i) a large document index as well as a complicated search process is required, leading to considerable memory and computational overhead; ii) independent scoring paradigms fail to capture the interactions among documents and sentences in ranking; iii) a fixed number of sentences are selected to form the final evidence set. In this work, we propose GERE, the first system that retrieves evidences in a generative fashion, i.e., generating the document titles as well as evidence sentence identifiers. This enables us to mitigate the aforementioned technical issues since: i) the memory and computational cost is greatly reduced because the document index is eliminated and the heavy ranking process is replaced by a light generative process; ii) the dependency between documents and that between sentences could be captured via sequential generation process; iii) the generative formulation allows us to dynamically select a precise set of relevant evidences for each claim. The experimental results on the FEVER dataset show that GERE achieves significant improvements over the state-of-the-art baselines, with both time-efficiency and memory-efficiency." Replication and Exploration of Generative Retrieval over Dynamic Corpora,2504.17519v1,chen2022corpusbrain,\cite{chen2022corpusbrain},"CorpusBrain: Pre-train a Generative Retrieval Model for Knowledge-Intensive Language Tasks",http://arxiv.org/abs/2208.07652v1,"Knowledge-intensive language tasks (KILT) usually require a large body of information to provide correct answers. A popular paradigm to solve this problem is to combine a search system with a machine reader, where the former retrieves supporting evidences and the latter examines them to produce answers. Recently, the reader component has witnessed significant advances with the help of large-scale pre-trained generative models. Meanwhile most existing solutions in the search component rely on the traditional ``index-retrieve-then-rank'' pipeline, which suffers from large memory footprint and difficulty in end-to-end optimization. Inspired by recent efforts in constructing model-based IR models, we propose to replace the traditional multi-step search pipeline with a novel single-step generative model, which can dramatically simplify the search process and be optimized in an end-to-end manner. We show that a strong generative retrieval model can be learned with a set of adequately designed pre-training tasks, and be adopted to improve a variety of downstream KILT tasks with further fine-tuning. We name the pre-trained generative retrieval model as CorpusBrain as all information about the corpus is encoded in its parameters without the need of constructing additional index. Empirical results show that CorpusBrain can significantly outperform strong baselines for the retrieval task on the KILT benchmark and establish new state-of-the-art downstream performances. 
We also show that CorpusBrain works well under zero- and low-resource settings.",True,True,"Chen, Jiangui and Zhang, Ruqing and Guo, Jiafeng and Liu, Yiqun and Fan, Yixing and Cheng, Xueqi",2022.0,,,,,"CorpusBrain: Pre-train a Generative Retrieval Model for Knowledge-Intensive Language Tasks",[2208.07652] CorpusBrain: Pre-train a Generative Retrieval Model ...,https://arxiv.org/abs/2208.07652,"CorpusBrain: Pre-train a Generative Retrieval Model for Knowledge-Intensive Language Tasks, by Jiangui Chen and 5 other authors" Replication and Exploration of Generative Retrieval over Dynamic Corpora,2504.17519v1,tang2024generative,\cite{tang2024generative},Generative Retrieval Meets Multi-Graded Relevance,http://arxiv.org/abs/2409.18409v1,"Generative retrieval represents a novel approach to information retrieval. It uses an encoder-decoder architecture to directly produce relevant document identifiers (docids) for queries. While this method offers benefits, current approaches are limited to scenarios with binary relevance data, overlooking the potential for documents to have multi-graded relevance. Extending generative retrieval to accommodate multi-graded relevance poses challenges, including the need to reconcile likelihood probabilities for docid pairs and the possibility of multiple relevant documents sharing the same identifier. To address these challenges, we introduce a framework called GRaded Generative Retrieval (GR$^2$). GR$^2$ focuses on two key components: ensuring relevant and distinct identifiers, and implementing multi-graded constrained contrastive training. First, we create identifiers that are both semantically relevant and sufficiently distinct to represent individual documents effectively. This is achieved by jointly optimizing the relevance and distinctness of docids through a combination of docid generation and autoencoder models. Second, we incorporate information about the relationship between relevance grades to guide the training process. We use a constrained contrastive training strategy to bring the representations of queries and the identifiers of their relevant documents closer together, based on their respective relevance grades.
Extensive experiments on datasets with both multi-graded and binary relevance demonstrate the effectiveness of GR$^2$.",True,True,"Tang, Yubao and Zhang, Ruqing and Guo, Jiafeng and de Rijke, Maarten and Chen, Wei and Cheng, Xueqi",2024.0,,,,arXiv preprint arXiv:2409.18409,Generative Retrieval Meets Multi-Graded Relevance,Generative Retrieval Meets Multi-Graded Relevance,https://proceedings.neurips.cc/paper_files/paper/2024/hash/853e781cb2af58956ed5c89aa59da3fc-Abstract-Conference.html,"Generative retrieval represents a novel approach to information retrieval, utilizing an encoder-decoder architecture to directly produce relevant document" Replication and Exploration of Generative Retrieval over Dynamic Corpora,2504.17519v1,li2023multiview,\cite{li2023multiview},Multiview Identifiers Enhanced Generative Retrieval,http://arxiv.org/abs/2305.16675v1,"Instead of simply matching a query to pre-existing passages, generative retrieval generates identifier strings of passages as the retrieval target. At a cost, the identifier must be distinctive enough to represent a passage. Current approaches use either a numeric ID or a text piece (such as a title or substrings) as the identifier. However, these identifiers cannot cover a passage's content well. As such, we are motivated to propose a new type of identifier, synthetic identifiers, that are generated based on the content of a passage and could integrate contextualized information that text pieces lack. Furthermore, we simultaneously consider multiview identifiers, including synthetic identifiers, titles, and substrings. These views of identifiers complement each other and facilitate the holistic ranking of passages from multiple perspectives. We conduct a series of experiments on three public datasets, and the results indicate that our proposed approach performs the best in generative retrieval, demonstrating its effectiveness and robustness.",True,True,"Li, Yongqi and Yang, Nan and Wang, Liang and Wei, Furu and Li, Wenjie",2023.0,,,,arXiv preprint arXiv:2305.16675,Multiview Identifiers Enhanced Generative Retrieval,Multiview Identifiers Enhanced Generative Retrieval,http://arxiv.org/pdf/2305.16675v1,"Instead of simply matching a query to pre-existing passages, generative retrieval generates identifier strings of passages as the retrieval target. At a cost, the identifier must be distinctive enough to represent a passage. Current approaches use either a numeric ID or a text piece (such as a title or substrings) as the identifier. However, these identifiers cannot cover a passage's content well. As such, we are motivated to propose a new type of identifier, synthetic identifiers, that are generated based on the content of a passage and could integrate contextualized information that text pieces lack. Furthermore, we simultaneously consider multiview identifiers, including synthetic identifiers, titles, and substrings. These views of identifiers complement each other and facilitate the holistic ranking of passages from multiple perspectives. We conduct a series of experiments on three public datasets, and the results indicate that our proposed approach performs the best in generative retrieval, demonstrating its effectiveness and robustness." 
Replication and Exploration of Generative Retrieval over Dynamic Corpora,2504.17519v1,bevilacqua2022autoregressive,\cite{bevilacqua2022autoregressive},"Autoregressive Search Engines: Generating Substrings as Document Identifiers",http://arxiv.org/abs/2204.10628v1,"Knowledge-intensive language tasks require NLP systems to both provide the correct answer and retrieve supporting evidence for it in a given corpus. Autoregressive language models are emerging as the de-facto standard for generating answers, with newer and more powerful systems emerging at an astonishing pace. In this paper we argue that all this (and future) progress can be directly applied to the retrieval problem with minimal intervention to the models' architecture. Previous work has explored ways to partition the search space into hierarchical structures and retrieve documents by autoregressively generating their unique identifier. In this work we propose an alternative that doesn't force any structure in the search space: using all ngrams in a passage as its possible identifiers. This setup allows us to use an autoregressive model to generate and score distinctive ngrams, that are then mapped to full passages through an efficient data structure. Empirically, we show this not only outperforms prior autoregressive approaches but also leads to an average improvement of at least 10 points over more established retrieval solutions for passage-level retrieval on the KILT benchmark, establishing new state-of-the-art downstream performance on some datasets, while using a considerably lighter memory footprint than competing systems. Code and pre-trained models at https://github.com/facebookresearch/SEAL.",True,True,"Bevilacqua, Michele and Ottaviano, Giuseppe and Lewis, Patrick and Yih, Scott and Riedel, Sebastian and Petroni, Fabio",2022.0,,,,Advances in Neural Information Processing Systems,"Autoregressive Search Engines: Generating Substrings as Document Identifiers",[PDF] Autoregressive Search Engines: Generating Substrings as ...,https://proceedings.neurips.cc/paper_files/paper/2022/file/cd88d62a2063fdaf7ce6f9068fb15dcd-Paper-Conference.pdf,"One way to approach retrieval with autoregressive models makes use of unique identifiers, i.e., string pointers to documents that are in some way easier to" Replication and Exploration of Generative Retrieval over Dynamic Corpora,2504.17519v1,mehta2022dsi++,\cite{mehta2022dsi++},DSI++: Updating transformer memory with new documents,,,True,False,"Mehta, Sanket Vaibhav and Gupta, Jai and Tay, Yi and Dehghani, Mostafa and Tran, Vinh Q and Rao, Jinfeng and Najork, Marc and Strubell, Emma and Metzler, Donald",2022.0,,,,arXiv preprint arXiv:2212.09744,DSI++: Updating transformer memory with new documents,DSI++: Updating Transformer Memory with New Documents,http://arxiv.org/pdf/2212.09744v3,"Differentiable Search Indices (DSIs) encode a corpus of documents in model parameters and use the same model to answer user queries directly. Despite the strong performance of DSI models, deploying them in situations where the corpus changes over time is computationally expensive because reindexing the corpus requires re-training the model. In this work, we introduce DSI++, a continual learning challenge for DSI to incrementally index new documents while being able to answer queries related to both previously and newly indexed documents. 
Across different model scales and document identifier representations, we show that continual indexing of new documents leads to considerable forgetting of previously indexed documents. We also hypothesize and verify that the model experiences forgetting events during training, leading to unstable learning. To mitigate these issues, we investigate two approaches. The first focuses on modifying the training dynamics. Flatter minima implicitly alleviate forgetting, so we optimize for flatter loss basins and show that the model stably memorizes more documents ($+12\%$). Next, we introduce a generative memory to sample pseudo-queries for documents and supplement them during continual indexing to prevent forgetting for the retrieval task. Extensive experiments on novel continual indexing benchmarks based on Natural Questions (NQ) and MS MARCO demonstrate that our proposed solution mitigates forgetting significantly. Concretely, it improves the average Hits@10 by $+21.1\%$ over competitive baselines for NQ and requires $6$ times fewer model updates compared to re-training the DSI model for incrementally indexing five corpora in a sequence." Replication and Exploration of Generative Retrieval over Dynamic Corpora,2504.17519v1,guo2024corpusbrain++,\cite{guo2024corpusbrain++},Corpusbrain++: A continual generative pre-training framework for knowledge-intensive language tasks,,,True,False,"Guo, Jiafeng and Zhou, Changjiang and Zhang, Ruqing and Chen, Jiangui and de Rijke, Maarten and Fan, Yixing and Cheng, Xueqi",2024.0,,,,arXiv preprint arXiv:2402.16767,Corpusbrain++: A continual generative pre-training framework for knowledge-intensive language tasks,[Literature Review] CorpusBrain++: A Continual Generative ...,https://www.themoonlight.io/en/review/corpusbrain-a-continual-generative-pre-training-framework-for-knowledge-intensive-language-tasks,"The paper ""CorpusBrain++: A Continual Generative Pre-Training Framework for Knowledge-Intensive Language Tasks"" addresses the challenge of updating information" Replication and Exploration of Generative Retrieval over Dynamic Corpora,2504.17519v1,chen2023continual,\cite{chen2023continual},Continual Learning for Generative Retrieval over Dynamic Corpora,http://arxiv.org/abs/2308.14968v1,"Generative retrieval (GR) directly predicts the identifiers of relevant documents (i.e., docids) based on a parametric model. It has achieved solid performance on many ad-hoc retrieval tasks. So far, these tasks have assumed a static document collection. In many practical scenarios, however, document collections are dynamic, where new documents are continuously added to the corpus. The ability to incrementally index new documents while preserving the ability to answer queries with both previously and newly indexed relevant documents is vital to applying GR models. In this paper, we address this practical continual learning problem for GR. We put forward a novel Continual-LEarner for generatiVE Retrieval (CLEVER) model and make two major contributions to continual learning for GR: (i) To encode new documents into docids with low computational cost, we present Incremental Product Quantization, which updates a partial quantization codebook according to two adaptive thresholds; and (ii) To memorize new documents for querying without forgetting previous knowledge, we propose a memory-augmented learning mechanism, to form meaningful connections between old and new documents. 
Empirical results demonstrate the effectiveness and efficiency of the proposed model.",True,True,"Chen, Jiangui and Zhang, Ruqing and Guo, Jiafeng and de Rijke, Maarten and Chen, Wei and Fan, Yixing and Cheng, Xueqi",2023.0,,,,,Continual Learning for Generative Retrieval over Dynamic Corpora,Continual Learning for Generative Retrieval over Dynamic Corpora,http://arxiv.org/pdf/2308.14968v1,"Generative retrieval (GR) directly predicts the identifiers of relevant documents (i.e., docids) based on a parametric model. It has achieved solid performance on many ad-hoc retrieval tasks. So far, these tasks have assumed a static document collection. In many practical scenarios, however, document collections are dynamic, where new documents are continuously added to the corpus. The ability to incrementally index new documents while preserving the ability to answer queries with both previously and newly indexed relevant documents is vital to applying GR models. In this paper, we address this practical continual learning problem for GR. We put forward a novel Continual-LEarner for generatiVE Retrieval (CLEVER) model and make two major contributions to continual learning for GR: (i) To encode new documents into docids with low computational cost, we present Incremental Product Quantization, which updates a partial quantization codebook according to two adaptive thresholds; and (ii) To memorize new documents for querying without forgetting previous knowledge, we propose a memory-augmented learning mechanism, to form meaningful connections between old and new documents. Empirical results demonstrate the effectiveness and efficiency of the proposed model." Replication and Exploration of Generative Retrieval over Dynamic Corpora,2504.17519v1,kim2023exploring,\cite{kim2023exploring},Exploring the Practicality of Generative Retrieval on Dynamic Corpora,http://arxiv.org/abs/2305.18952v5,"Benchmarking the performance of information retrieval (IR) is mostly conducted with a fixed set of documents (static corpora). However, in realistic scenarios, this is rarely the case and the documents to be retrieved are constantly updated and added. In this paper, we focus on Generative Retrievals (GR), which apply autoregressive language models to IR problems, and explore their adaptability and robustness in dynamic scenarios. We also conduct an extensive evaluation of computational and memory efficiency, crucial factors for real-world deployment of IR systems handling vast and ever-changing document collections. Our results on the StreamingQA benchmark demonstrate that GR is more adaptable to evolving knowledge (4-11%), robust in learning knowledge with temporal information, and efficient in terms of inference FLOPs (x2), indexing time (x6), and storage footprint (x4) compared to Dual Encoders (DE), which are commonly used in retrieval systems. Our paper highlights the potential of GR for future use in practical IR systems within dynamic environments.",True,True,"Kim, Chaeeun and Yoon, Soyoung and Lee, Hyunji and Jang, Joel and Yang, Sohee and Seo, Minjoon",2023.0,,,,arXiv preprint arXiv:2305.18952,Exploring the Practicality of Generative Retrieval on Dynamic Corpora,Exploring the Practicality of Generative Retrieval on Dynamic Corpora,http://arxiv.org/pdf/2305.18952v5,"Benchmarking the performance of information retrieval (IR) is mostly conducted with a fixed set of documents (static corpora). However, in realistic scenarios, this is rarely the case and the documents to be retrieved are constantly updated and added. 
In this paper, we focus on Generative Retrievals (GR), which apply autoregressive language models to IR problems, and explore their adaptability and robustness in dynamic scenarios. We also conduct an extensive evaluation of computational and memory efficiency, crucial factors for real-world deployment of IR systems handling vast and ever-changing document collections. Our results on the StreamingQA benchmark demonstrate that GR is more adaptable to evolving knowledge (4-11%), robust in learning knowledge with temporal information, and efficient in terms of inference FLOPs (x2), indexing time (x6), and storage footprint (x4) compared to Dual Encoders (DE), which are commonly used in retrieval systems. Our paper highlights the potential of GR for future use in practical IR systems within dynamic environments." "NLCTables: A Dataset for Marrying Natural Language Conditions with Table Discovery",2504.15849v1,wang_retrieving_2021,\cite{wang_retrieving_2021},"Retrieving Complex Tables with Multi-Granular Graph Representation Learning",http://arxiv.org/abs/2105.01736v1,"The task of natural language table retrieval (NLTR) seeks to retrieve semantically relevant tables based on natural language queries. Existing learning systems for this task often treat tables as plain text based on the assumption that tables are structured as dataframes. However, tables can have complex layouts which indicate diverse dependencies between subtable structures, such as nested headers. As a result, queries may refer to different spans of relevant content that is distributed across these structures. Moreover, such systems fail to generalize to novel scenarios beyond those seen in the training set. Prior methods are still distant from a generalizable solution to the NLTR problem, as they fall short in handling complex table layouts or queries over multiple granularities. To address these issues, we propose Graph-based Table Retrieval (GTR), a generalizable NLTR framework with multi-granular graph representation learning. In our framework, a table is first converted into a tabular graph, with cell nodes, row nodes and column nodes to capture content at different granularities. Then the tabular graph is input to a Graph Transformer model that can capture both table cell content and the layout structures. To enhance the robustness and generalizability of the model, we further incorporate a self-supervised pre-training task based on graph-context matching. Experimental results on two benchmarks show that our method leads to significant improvements over the current state-of-the-art systems. Further experiments demonstrate promising performance of our method on cross-dataset generalization, and enhanced capability of handling complex tables and fulfilling diverse query intents. Code and data are available at https://github.com/FeiWang96/GTR.",True,True,"Wang, Fei and Sun, Kexuan and Chen, Muhao and Pujara, Jay and Szekely, Pedro",2021.0,,,10.1145/3404835.3462909,,"Retrieving Complex Tables with Multi-Granular Graph Representation Learning",[PDF] Retrieving Complex Tables with Multi-Granular Graph ... - arXiv,https://arxiv.org/pdf/2105.01736,GTR leverages state-of-the-art graph representation learning techniques to capture both content and layout structures of complex tables. 
"NLCTables: A Dataset for Marrying Natural Language Conditions with Table Discovery",2504.15849v1,trabelsi_strubert_2022,\cite{trabelsi_strubert_2022},StruBERT: Structure-aware BERT for Table Search and Matching,http://arxiv.org/abs/2203.14278v1,"A large amount of information is stored in data tables. Users can search for data tables using a keyword-based query. A table is composed primarily of data values that are organized in rows and columns providing implicit structural information. A table is usually accompanied by secondary information such as the caption, page title, etc., that form the textual information. Understanding the connection between the textual and structural information is an important yet neglected aspect in table retrieval as previous methods treat each source of information independently. In addition, users can search for data tables that are similar to an existing table, and this setting can be seen as a content-based table retrieval. In this paper, we propose StruBERT, a structure-aware BERT model that fuses the textual and structural information of a data table to produce context-aware representations for both textual and tabular content of a data table. StruBERT features are integrated in a new end-to-end neural ranking model to solve three table-related downstream tasks: keyword- and content-based table retrieval, and table similarity. We evaluate our approach using three datasets, and we demonstrate substantial improvements in terms of retrieval and classification metrics over state-of-the-art methods.",True,True,"Trabelsi, Mohamed and Chen, Zhiyu and Zhang, Shuo and Davison, Brian D. and Heflin, Jeff",2022.0,,,10.1145/3485447.3511972,,StruBERT: Structure-aware BERT for Table Search and Matching,[PDF] StruBERT: Structure-aware BERT for Table Search and Matching,https://www.cse.lehigh.edu/~brian/pubs/2022/WWW/StruBERT.pdf,"In this paper, we propose StruBERT, a structure-aware BERT model that fuses the textual and structural information of a data table to pro- duce context-aware" "NLCTables: A Dataset for Marrying Natural Language Conditions with Table Discovery",2504.15849v1,wang_solo_2023,\cite{wang_solo_2023},"Solo: Data Discovery Using Natural Language Questions Via A Self-Supervised Approach",http://arxiv.org/abs/2301.03560v2,"Most deployed data discovery systems, such as Google Datasets, and open data portals only support keyword search. Keyword search is geared towards general audiences but limits the types of queries the systems can answer. We propose a new system that lets users write natural language questions directly. A major barrier to using this learned data discovery system is it needs expensive-to-collect training data, thus limiting its utility. In this paper, we introduce a self-supervised approach to assemble training datasets and train learned discovery systems without human intervention. It requires addressing several challenges, including the design of self-supervised strategies for data discovery, table representation strategies to feed to the models, and relevance models that work well with the synthetically generated questions. We combine all the above contributions into a system, Solo, that solves the problem end to end. The evaluation results demonstrate the new techniques outperform state-of-the-art approaches on well-known benchmarks. All in all, the technique is a stepping stone towards building learned discovery systems. 
The code is open-sourced at https://github.com/TheDataStation/solo",True,True,"Qiming Wang and Raul Castro Fernandez",2023.0,,,10.1145/3626756,Proc. {ACM} Manag. Data,"Solo: Data Discovery Using Natural Language Questions Via A Self-Supervised Approach",[PDF] Solo: Data Discovery Using Natural Language Questions Via A Self ...,https://arxiv.org/pdf/2301.03560,"Solo is a system that allows users to write natural language questions for data discovery, using a self-supervised approach to train the system." "NLCTables: A Dataset for Marrying Natural Language Conditions with Table Discovery",2504.15849v1,nargesian_table_2018,\cite{nargesian_table_2018},Table union search on open data,,,True,False,"Nargesian, Fatemeh and Zhu, Erkang and Pu, Ken Q. and Miller, Renée J.",2018.0,,,10.14778/3192965.3192973,Proc. VLDB Endow.,Table union search on open data,[PDF] Table Union Search on Open Data | Semantic Scholar,https://www.semanticscholar.org/paper/Table-Union-Search-on-Open-Data-Nargesian-Zhu/5cadff7988d29c1596689d5b864f87f371783a50,This work defines the table union search problem and presents a probabilistic solution for finding tables that are unionable with a query table within "NLCTables: A Dataset for Marrying Natural Language Conditions with Table Discovery",2504.15849v1,bogatu_dataset_2020,\cite{bogatu_dataset_2020},Dataset Discovery in Data Lakes,http://arxiv.org/abs/2011.10427v1,"Data analytics stands to benefit from the increasing availability of datasets that are held without their conceptual relationships being explicitly known. When collected, these datasets form a data lake from which, by processes like data wrangling, specific target datasets can be constructed that enable value-adding analytics. Given the potential vastness of such data lakes, the issue arises of how to pull out of the lake those datasets that might contribute to wrangling out a given target. We refer to this as the problem of dataset discovery in data lakes and this paper contributes an effective and efficient solution to it. Our approach uses features of the values in a dataset to construct hash-based indexes that map those features into a uniform distance space. This makes it possible to define similarity distances between features and to take those distances as measurements of relatedness w.r.t. a target table. Given the latter (and exemplar tuples), our approach returns the most related tables in the lake. We provide a detailed description of the approach and report on empirical results for two forms of relatedness (unionability and joinability) comparing them with prior work, where pertinent, and showing significant improvements in all of precision, recall, target coverage, indexing and discovery times.",True,True,"Bogatu, Alex and Fernandes, Alvaro A. A. and Paton, Norman W. and Konstantinou, Nikolaos",2020.0,,,10.1109/ICDE48307.2020.00067,,Dataset Discovery in Data Lakes,Dataset Discovery in Data Lakes,http://arxiv.org/pdf/2011.10427v1,"Data analytics stands to benefit from the increasing availability of datasets that are held without their conceptual relationships being explicitly known. When collected, these datasets form a data lake from which, by processes like data wrangling, specific target datasets can be constructed that enable value-adding analytics. Given the potential vastness of such data lakes, the issue arises of how to pull out of the lake those datasets that might contribute to wrangling out a given target. 
We refer to this as the problem of dataset discovery in data lakes and this paper contributes an effective and efficient solution to it. Our approach uses features of the values in a dataset to construct hash-based indexes that map those features into a uniform distance space. This makes it possible to define similarity distances between features and to take those distances as measurements of relatedness w.r.t. a target table. Given the latter (and exemplar tuples), our approach returns the most related tables in the lake. We provide a detailed description of the approach and report on empirical results for two forms of relatedness (unionability and joinability) comparing them with prior work, where pertinent, and showing significant improvements in all of precision, recall, target coverage, indexing and discovery times." "NLCTables: A Dataset for Marrying Natural Language Conditions with Table Discovery",2504.15849v1,khatiwada_santos_2023,\cite{khatiwada_santos_2023},SANTOS: Relationship-based Semantic Table Union Search,http://arxiv.org/abs/2209.13589v1,"Existing techniques for unionable table search define unionability using metadata (tables must have the same or similar schemas) or column-based metrics (for example, the values in a table should be drawn from the same domain). In this work, we introduce the use of semantic relationships between pairs of columns in a table to improve the accuracy of union search. Consequently, we introduce a new notion of unionability that considers relationships between columns, together with the semantics of columns, in a principled way. To do so, we present two new methods to discover semantic relationship between pairs of columns. The first uses an existing knowledge base (KB), the second (which we call a ""synthesized KB"") uses knowledge from the data lake itself. We adopt an existing Table Union Search benchmark and present new (open) benchmarks that represent small and large real data lakes. We show that our new unionability search algorithm, called SANTOS, outperforms a state-of-the-art union search that uses a wide variety of column-based semantics, including word embeddings and regular expressions. We show empirically that our synthesized KB improves the accuracy of union search by representing relationship semantics that may not be contained in an available KB. This result hints at a promising future of creating a synthesized KBs from data lakes with limited KB coverage and using them for union search.",True,True,"Khatiwada, Aamod and Fan, Grace and Shraga, Roee and Chen, Zixuan and Gatterbauer, Wolfgang and Miller, Renée J. and Riedewald, Mirek",2023.0,,,10.1145/3588689,Proc. ACM Manag. Data,SANTOS: Relationship-based Semantic Table Union Search,SANTOS: Relationship-based Semantic Table Union Search,https://dl.acm.org/doi/10.1145/3588689,"Our new unionability search algorithm, called SANTOS, outperforms a state-of-the-art union search that uses a wide variety of column-based semantics." "NLCTables: A Dataset for Marrying Natural Language Conditions with Table Discovery",2504.15849v1,fan_semantics-aware_2023,\cite{fan_semantics-aware_2023},"Semantics-aware Dataset Discovery from Data Lakes with Contextualized Column-based Representation Learning",http://arxiv.org/abs/2210.01922v2,"Dataset discovery from data lakes is essential in many real application scenarios. In this paper, we propose Starmie, an end-to-end framework for dataset discovery from data lakes (with table union search as the main use case). 
Our proposed framework features a contrastive learning method to train column encoders from pre-trained language models in a fully unsupervised manner. The column encoder of Starmie captures the rich contextual semantic information within tables by leveraging a contrastive multi-column pre-training strategy. We utilize the cosine similarity between column embedding vectors as the column unionability score and propose a filter-and-verification framework that allows exploring a variety of design choices to compute the unionability score between two tables accordingly. Empirical evaluation results on real table benchmark datasets show that Starmie outperforms the best-known solutions in the effectiveness of table union search by 6.8 in MAP and recall. Moreover, Starmie is the first to employ the HNSW (Hierarchical Navigable Small World) index for accelerate query processing of table union search which provides a 3,000X performance gain over the linear scan baseline and a 400X performance gain over an LSH index (the state-of-the-art solution for data lake indexing).",True,True,"Grace Fan and Jin Wang and Yuliang Li and Dan Zhang and Ren{\'{e}}e J. Miller",2023.0,,,10.14778/3587136.3587146,Proc. {VLDB} Endow.,"Semantics-aware Dataset Discovery from Data Lakes with Contextualized Column-based Representation Learning",Semantics-aware Dataset Discovery from Data Lakes with ...,https://www.researchgate.net/publication/364194737_Semantics-aware_Dataset_Discovery_from_Data_Lakes_with_Contextualized_Column-based_Representation_Learning,Our proposed framework features a contrastive learning method to train column encoders from pre-trained language models in a fully unsupervised "NLCTables: A Dataset for Marrying Natural Language Conditions with Table Discovery",2504.15849v1,hu_automatic_2023,\cite{hu_automatic_2023},Automatic {Table} {Union} {Search} with {Tabular} {Representation} {Learning},,,True,False,"Hu, Xuming and Wang, Shen and Qin, Xiao and Lei, Chuan and Shen, Zhengyuan and Faloutsos, Christos and Katsifodimos, Asterios and Karypis, George and Wen, Lijie and Yu, Philip S.",2023.0,,,10.18653/v1/2023.findings-acl.233,,Automatic {Table} {Union} {Search} with {Tabular} {Representation} {Learning},[PDF] Automatic Table Union Search with Tabular Representation Learning,https://aclanthology.org/2023.findings-acl.233.pdf,The table union search aims to find all tables in a data lake that have the columns from the same domain as the query ta- ble. With the help of "NLCTables: A Dataset for Marrying Natural Language Conditions with Table Discovery",2504.15849v1,deng_turl_2020,\cite{deng_turl_2020},TURL: Table Understanding through Representation Learning,http://arxiv.org/abs/2006.14806v2,"Relational tables on the Web store a vast amount of knowledge. Owing to the wealth of such tables, there has been tremendous progress on a variety of tasks in the area of table understanding. However, existing work generally relies on heavily-engineered task-specific features and model architectures. In this paper, we present TURL, a novel framework that introduces the pre-training/fine-tuning paradigm to relational Web tables. During pre-training, our framework learns deep contextualized representations on relational tables in an unsupervised manner. Its universal model design with pre-trained representations can be applied to a wide range of tasks with minimal task-specific fine-tuning. 
Specifically, we propose a structure-aware Transformer encoder to model the row-column structure of relational tables, and present a new Masked Entity Recovery (MER) objective for pre-training to capture the semantics and knowledge in large-scale unlabeled data. We systematically evaluate TURL with a benchmark consisting of 6 different tasks for table understanding (e.g., relation extraction, cell filling). We show that TURL generalizes well to all tasks and substantially outperforms existing methods in almost all instances.",True,True,"Deng, Xiang and Sun, Huan and Lees, Alyssa and Wu, You and Yu, Cong",2020.0,,,10.14778/3430915.3430921,Proc. VLDB Endow.,TURL: Table Understanding through Representation Learning,TURL: Table Understanding through Representation Learning,http://arxiv.org/pdf/2006.14806v2,"Relational tables on the Web store a vast amount of knowledge. Owing to the wealth of such tables, there has been tremendous progress on a variety of tasks in the area of table understanding. However, existing work generally relies on heavily-engineered task-specific features and model architectures. In this paper, we present TURL, a novel framework that introduces the pre-training/fine-tuning paradigm to relational Web tables. During pre-training, our framework learns deep contextualized representations on relational tables in an unsupervised manner. Its universal model design with pre-trained representations can be applied to a wide range of tasks with minimal task-specific fine-tuning. Specifically, we propose a structure-aware Transformer encoder to model the row-column structure of relational tables, and present a new Masked Entity Recovery (MER) objective for pre-training to capture the semantics and knowledge in large-scale unlabeled data. We systematically evaluate TURL with a benchmark consisting of 6 different tasks for table understanding (e.g., relation extraction, cell filling). We show that TURL generalizes well to all tasks and substantially outperforms existing methods in almost all instances." "NLCTables: A Dataset for Marrying Natural Language Conditions with Table Discovery",2504.15849v1,yakout_infogather_2012,\cite{yakout_infogather_2012},{InfoGather}: entity augmentation and attribute discovery by holistic matching with web tables,,,True,False,"Yakout, Mohamed and Ganjam, Kris and Chakrabarti, Kaushik and Chaudhuri, Surajit",2012.0,,,10.1145/2213836.2213848,,{InfoGather}: entity augmentation and attribute discovery by holistic matching with web tables,entity augmentation and attribute discovery by holistic matching with ...,https://www.semanticscholar.org/paper/InfoGather%3A-entity-augmentation-and-attribute-by-Yakout-Ganjam/aab33faadc0b7d5193ee8df1b911db297be8c66b,"InfoGather: entity augmentation and attribute discovery by holistic matching with web tables · Mohamed Yakout, Kris Ganjam, Kaushik Chakrabarti, Surajit Chaudhuri"
"NLCTables: A Dataset for Marrying Natural Language Conditions with Table Discovery",2504.15849v1,zhu_josie_2019,\cite{zhu_josie_2019},{JOSIE}: {Overlap} {Set} {Similarity} {Search} for {Finding} {Joinable} {Tables} in {Data} {Lakes},,,True,False,"Zhu, Erkang and Deng, Dong and Nargesian, Fatemeh and Miller, Renée J.",2019.0,,,10.1145/3299869.3300065,,{JOSIE}: {Overlap} {Set} {Similarity} {Search} for {Finding} {Joinable} {Tables} in {Data} {Lakes},JOSIE: Overlap Set Similarity Search for Finding Joinable Tables in ...,https://dl.acm.org/doi/10.1145/3299869.3300065,"We show that JOSIE completely outperforms the state-of-the-art overlap set similarity search techniques on data lakes."
The experiments also demonstrate the efficiency of the proposed method.",True,True,"Dong, Yuyang and Takeoka, Kunihiro and Xiao, Chuan and Oyamada, Masafumi",2021.0,,,10.1109/ICDE51399.2021.00046,,"Efficient Joinable Table Discovery in Data Lakes: A High-Dimensional Similarity-Based Approach",[PDF] Efficient Joinable Table Discovery in Data Lakes,https://www.semanticscholar.org/paper/Efficient-Joinable-Table-Discovery-in-Data-Lakes%3A-A-Dong-Takeoka/072d47348cd93b178749b59ab355aa255fa4eeff,"PEXESO is proposed, a framework for joinable table discovery in data lakes that identifies substantially more tables than equi-joins and outperforms other" "NLCTables: A Dataset for Marrying Natural Language Conditions with Table Discovery",2504.15849v1,dong_deepjoin_2023,\cite{dong_deepjoin_2023},DeepJoin: Joinable Table Discovery with Pre-trained Language Models,http://arxiv.org/abs/2212.07588v2,"Due to the usefulness in data enrichment for data analysis tasks, joinable table discovery has become an important operation in data lake management. Existing approaches target equi-joins, the most common way of combining tables for creating a unified view, or semantic joins, which tolerate misspellings and different formats to deliver more join results. They are either exact solutions whose running time is linear in the sizes of query column and target table repository or approximate solutions lacking precision. In this paper, we propose Deepjoin, a deep learning model for accurate and efficient joinable table discovery. Our solution is an embedding-based retrieval, which employs a pre-trained language model (PLM) and is designed as one framework serving both equi- and semantic joins. We propose a set of contextualization options to transform column contents to a text sequence. The PLM reads the sequence and is fine-tuned to embed columns to vectors such that columns are expected to be joinable if they are close to each other in the vector space. Since the output of the PLM is fixed in length, the subsequent search procedure becomes independent of the column size. With a state-of-the-art approximate nearest neighbor search algorithm, the search time is logarithmic in the repository size. To train the model, we devise the techniques for preparing training data as well as data augmentation. The experiments on real datasets demonstrate that by training on a small subset of a corpus, Deepjoin generalizes to large datasets and its precision consistently outperforms other approximate solutions'. Deepjoin is even more accurate than an exact solution to semantic joins when evaluated with labels from experts. Moreover, when equipped with a GPU, Deepjoin is up to two orders of magnitude faster than existing solutions.",True,True,"Dong, Yuyang and Xiao, Chuan and Nozawa, Takuma and Enomoto, Masafumi and Oyamada, Masafumi",2023.0,,,10.14778/3603581.3603587,Proc. VLDB Endow.,DeepJoin: Joinable Table Discovery with Pre-trained Language Models,[PDF] DeepJoin: Joinable Table Discovery with Pre-trained Language ...,https://www.vldb.org/pvldb/vol16/p2458-dong.pdf,"DeepJoin is a deep learning model using a pre-trained language model for joinable table discovery, handling both equi- and semantic joins." 
"NLCTables: A Dataset for Marrying Natural Language Conditions with Table Discovery",2504.15849v1,liu_feature_2022,\cite{liu_feature_2022},Feature {Augmentation} with {Reinforcement} {Learning},,,True,False,"Liu, Jiabin and Chai, Chengliang and Luo, Yuyu and Lou, Yin and Feng, Jianhua and Tang, Nan",2022.0,,,10.1109/ICDE53745.2022.00317,,Feature {Augmentation} with {Reinforcement} {Learning},Feature Augmentation with Reinforcement Learning,https://ieeexplore.ieee.org/document/9835530,"by J Liu · 2022 · Cited by 54 — We propose a reinforcement learning based framework, namely AutoFeature, to augment the features following an exploration-exploitation strategy." "NLCTables: A Dataset for Marrying Natural Language Conditions with Table Discovery",2504.15849v1,chepurko_arda_2020,\cite{chepurko_arda_2020},ARDA: Automatic Relational Data Augmentation for Machine Learning,http://arxiv.org/abs/2003.09758v1,"Automatic machine learning (\AML) is a family of techniques to automate the process of training predictive models, aiming to both improve performance and make machine learning more accessible. While many recent works have focused on aspects of the machine learning pipeline like model selection, hyperparameter tuning, and feature selection, relatively few works have focused on automatic data augmentation. Automatic data augmentation involves finding new features relevant to the user's predictive task with minimal ``human-in-the-loop'' involvement. We present \system, an end-to-end system that takes as input a dataset and a data repository, and outputs an augmented data set such that training a predictive model on this augmented dataset results in improved performance. Our system has two distinct components: (1) a framework to search and join data with the input data, based on various attributes of the input, and (2) an efficient feature selection algorithm that prunes out noisy or irrelevant features from the resulting join. We perform an extensive empirical evaluation of different system components and benchmark our feature selection algorithm on real-world datasets.",True,True,"Chepurko, Nadiia and Marcus, Ryan and Zgraggen, Emanuel and Fernandez, Raul Castro and Kraska, Tim and Karger, David",2020.0,,,10.14778/3397230.3397235,Proc. VLDB Endow.,ARDA: Automatic Relational Data Augmentation for Machine Learning,ARDA: Automatic Relational Data Augmentation for Machine Learning,http://arxiv.org/pdf/2003.09758v1,"Automatic machine learning (\AML) is a family of techniques to automate the process of training predictive models, aiming to both improve performance and make machine learning more accessible. While many recent works have focused on aspects of the machine learning pipeline like model selection, hyperparameter tuning, and feature selection, relatively few works have focused on automatic data augmentation. Automatic data augmentation involves finding new features relevant to the user's predictive task with minimal ``human-in-the-loop'' involvement. We present \system, an end-to-end system that takes as input a dataset and a data repository, and outputs an augmented data set such that training a predictive model on this augmented dataset results in improved performance. Our system has two distinct components: (1) a framework to search and join data with the input data, based on various attributes of the input, and (2) an efficient feature selection algorithm that prunes out noisy or irrelevant features from the resulting join. 
We perform an extensive empirical evaluation of different system components and benchmark our feature selection algorithm on real-world datasets." "NLCTables: A Dataset for Marrying Natural Language Conditions with Table Discovery",2504.15849v1,Table2022dong,\cite{Table2022dong},Table Enrichment System for Machine Learning,http://arxiv.org/abs/2204.08235v1,"Data scientists are constantly facing the problem of how to improve prediction accuracy with insufficient tabular data. We propose a table enrichment system that enriches a query table by adding external attributes (columns) from data lakes and improves the accuracy of machine learning predictive models. Our system has four stages, join row search, task-related table selection, row and column alignment, and feature selection and evaluation, to efficiently create an enriched table for a given query table and a specified machine learning task. We demonstrate our system with a web UI to show the use cases of table enrichment.",True,True,"Dong, Yuyang and Oyamada, Masafumi",2022.0,,,10.1145/3477495.3531678,,Table Enrichment System for Machine Learning,Table Enrichment System for Machine Learning,http://arxiv.org/pdf/2204.08235v1,"Data scientists are constantly facing the problem of how to improve prediction accuracy with insufficient tabular data. We propose a table enrichment system that enriches a query table by adding external attributes (columns) from data lakes and improves the accuracy of machine learning predictive models. Our system has four stages, join row search, task-related table selection, row and column alignment, and feature selection and evaluation, to efficiently create an enriched table for a given query table and a specified machine learning task. We demonstrate our system with a web UI to show the use cases of table enrichment." "NLCTables: A Dataset for Marrying Natural Language Conditions with Table Discovery",2504.15849v1,zhang_ad_2018,\cite{zhang_ad_2018},Ad Hoc Table Retrieval using Semantic Similarity,http://arxiv.org/abs/1802.06159v3,"We introduce and address the problem of ad hoc table retrieval: answering a keyword query with a ranked list of tables. This task is not only interesting on its own account, but is also being used as a core component in many other table-based information access scenarios, such as table completion or table mining. The main novel contribution of this work is a method for performing semantic matching between queries and tables. Specifically, we (i) represent queries and tables in multiple semantic spaces (both discrete sparse and continuous dense vector representations) and (ii) introduce various similarity measures for matching those semantic representations. We consider all possible combinations of semantic representations and similarity measures and use these as features in a supervised learning model. Using a purpose-built test collection based on Wikipedia tables, we demonstrate significant and substantial improvements over a state-of-the-art baseline.",True,True,"Zhang, Shuo and Balog, Krisztian",2018.0,,,10.1145/3178876.3186067,,Ad Hoc Table Retrieval using Semantic Similarity,Ad Hoc Table Retrieval using Semantic Similarity,http://arxiv.org/pdf/1802.06159v3,"We introduce and address the problem of ad hoc table retrieval: answering a keyword query with a ranked list of tables. This task is not only interesting on its own account, but is also being used as a core component in many other table-based information access scenarios, such as table completion or table mining. 
The main novel contribution of this work is a method for performing semantic matching between queries and tables. Specifically, we (i) represent queries and tables in multiple semantic spaces (both discrete sparse and continuous dense vector representations) and (ii) introduce various similarity measures for matching those semantic representations. We consider all possible combinations of semantic representations and similarity measures and use these as features in a supervised learning model. Using a purpose-built test collection based on Wikipedia tables, we demonstrate significant and substantial improvements over a state-of-the-art baseline." "NLCTables: A Dataset for Marrying Natural Language Conditions with Table Discovery",2504.15849v1,deng2024lakebench,\cite{deng2024lakebench},LakeBench: A Benchmark for Discovering Joinable and Unionable Tables in Data Lakes,,,True,False,"Deng, Yuhao and Chai, Chengliang and Cao, Lei and Yuan, Qin and Chen, Siyuan and Yu, Yanrui and Sun, Zhaoze and Wang, Junyi and Li, Jiajun and Cao, Ziqi and others",2024.0,,,,Proc. VLDB Endow.,LakeBench: A Benchmark for Discovering Joinable and Unionable Tables in Data Lakes,[PDF] LakeBench: A Benchmark for Discovering Joinable and Unionable ...,https://www.vldb.org/pvldb/vol17/p1925-chai.pdf,Discovering tables from poorly maintained data lakes is a significant challenge in data management. Two key tasks are identifying joinable and unionable "NLCTables: A Dataset for Marrying Natural Language Conditions with Table Discovery",2504.15849v1,opendata,\cite{opendata},OpenData,,,True,False,,,,https://open.canada.ca/,,,OpenData,NYC Open Data -,https://opendata.cityofnewyork.us/,"Open Data is free public data published by New York City agencies and other partners. View recently published and the most popular datasets on the data catalog." "NLCTables: A Dataset for Marrying Natural Language Conditions with Table Discovery",2504.15849v1,venetis_recovering_2011,\cite{venetis_recovering_2011},Recovering semantics of tables on the web,,,True,False,"Venetis, Petros and Halevy, Alon and Madhavan, Jayant and Paşca, Marius and Shen, Warren and Wu, Fei and Miao, Gengxin and Wu, Chung",2011.0,,,10.14778/2002938.2002939,Proc. VLDB Endow.,Recovering semantics of tables on the web,[PDF] Recovering Semantics of Tables on the Web - VLDB Endowment,http://www.vldb.org/pvldb/vol4/p528-venetis.pdf,"To recover semantics of tables, we leverage a database of class labels and relationships automatically extracted from the Web. The database of classes and" "NLCTables: A Dataset for Marrying Natural Language Conditions with Table Discovery",2504.15849v1,cafarella2009data,\cite{cafarella2009data},Data integration for the relational web,,,True,False,"Cafarella, Michael J and Halevy, Alon and Khoussainova, Nodira",2009.0,,,,Proc. VLDB Endow.,Data integration for the relational web,Data Integration for the Relational Web.,https://dblp.org/rec/journals/pvldb/CafarellaHK09,"Michael J. Cafarella, Alon Y. Halevy, Nodira Khoussainova: Data Integration for the Relational Web. Proc. VLDB Endow.
2(1): 1090-1101 (2009)." "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,lifairness,\cite{lifairness},"Fairness in Recommendation: Foundations, Methods and Applications",http://arxiv.org/abs/2205.13619v6,"As one of the most pervasive applications of machine learning, recommender systems are playing an important role on assisting human decision making. The satisfaction of users and the interests of platforms are closely related to the quality of the generated recommendation results. However, as a highly data-driven system, recommender system could be affected by data or algorithmic bias and thus generate unfair results, which could weaken the reliance of the systems. As a result, it is crucial to address the potential unfairness problems in recommendation settings. Recently, there has been growing attention on fairness considerations in recommender systems with more and more literature on approaches to promote fairness in recommendation. However, the studies are rather fragmented and lack a systematic organization, thus making it difficult to penetrate for new researchers to the domain. This motivates us to provide a systematic survey of existing works on fairness in recommendation. This survey focuses on the foundations for fairness in recommendation literature. It first presents a brief introduction about fairness in basic machine learning tasks such as classification and ranking in order to provide a general overview of fairness research, as well as introduce the more complex situations and challenges that need to be considered when studying fairness in recommender systems. After that, the survey will introduce fairness in recommendation with a focus on the taxonomies of current fairness definitions, the typical techniques for improving fairness, as well as the datasets for fairness studies in recommendation. The survey also talks about the challenges and opportunities in fairness research with the hope of promoting the fair recommendation research area and beyond.",True,True,"Li, Yunqi and Chen, Hanxiong and Xu, Shuyuan and Ge, Yingqiang and Tan, Juntao and Liu, Shuchang and Zhang, Yongfeng",,,,,ACM Transactions on Intelligent Systems and Technology,"Fairness in Recommendation: Foundations, Methods and Applications","Fairness in Recommendation: Foundations, Methods and Applications",http://arxiv.org/pdf/2205.13619v6,"As one of the most pervasive applications of machine learning, recommender systems are playing an important role on assisting human decision making. The satisfaction of users and the interests of platforms are closely related to the quality of the generated recommendation results. However, as a highly data-driven system, recommender system could be affected by data or algorithmic bias and thus generate unfair results, which could weaken the reliance of the systems. As a result, it is crucial to address the potential unfairness problems in recommendation settings. Recently, there has been growing attention on fairness considerations in recommender systems with more and more literature on approaches to promote fairness in recommendation. However, the studies are rather fragmented and lack a systematic organization, thus making it difficult to penetrate for new researchers to the domain. This motivates us to provide a systematic survey of existing works on fairness in recommendation. This survey focuses on the foundations for fairness in recommendation literature. 
It first presents a brief introduction about fairness in basic machine learning tasks such as classification and ranking in order to provide a general overview of fairness research, as well as introduce the more complex situations and challenges that need to be considered when studying fairness in recommender systems. After that, the survey will introduce fairness in recommendation with a focus on the taxonomies of current fairness definitions, the typical techniques for improving fairness, as well as the datasets for fairness studies in recommendation. The survey also talks about the challenges and opportunities in fairness research with the hope of promoting the fair recommendation research area and beyond." "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,lipani2016fairness,\cite{lipani2016fairness},Fairness in Information Retrieval,,,True,False,"Lipani, Aldo",2016.0,,,,,Fairness in Information Retrieval,FAIR: Fairness-Aware Information Retrieval Evaluation,https://arxiv.org/abs/2106.08527,"by R Gao · 2021 · Cited by 33 — We propose a new metric called FAIR. By unifying standard IR metrics and fairness measures into an integrated metric, this metric offers a new perspective for" "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,deldjoo2022survey,\cite{deldjoo2022survey},A Survey of Research on Fair Recommender Systems,,,True,False,"Deldjoo, Yashar and Jannach, Dietmar and Bellogin, Alejandro and Difonzo, Alessandro and Zanzonelli, Dario",2022.0,,,,arXiv preprint arXiv:2205.11127,A Survey of Research on Fair Recommender Systems,A Survey of Research on Fair Recommender Systems - OpenReview,https://openreview.net/forum?id=K7emU6kWa9,"In this survey, we first review the fundamental concepts and notions of fairness that were put forward in the area in the recent past." "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,xu2025fairdiversecomprehensivetoolkitfair,\cite{xu2025fairdiversecomprehensivetoolkitfair},"FairDiverse: A Comprehensive Toolkit for Fair and Diverse Information Retrieval Algorithms",http://arxiv.org/abs/2502.11883v1,"In modern information retrieval (IR), achieving more than just accuracy is essential to sustaining a healthy ecosystem, especially when addressing fairness and diversity considerations. To meet these needs, various datasets, algorithms, and evaluation frameworks have been introduced. However, these algorithms are often tested across diverse metrics, datasets, and experimental setups, leading to inconsistencies and difficulties in direct comparisons. This highlights the need for a comprehensive IR toolkit that enables standardized evaluation of fairness- and diversity-aware algorithms across different IR tasks. To address this challenge, we present FairDiverse, an open-source and standardized toolkit. FairDiverse offers a framework for integrating fair and diverse methods, including pre-processing, in-processing, and post-processing techniques, at different stages of the IR pipeline. The toolkit supports the evaluation of 28 fairness and diversity algorithms across 16 base models, covering two core IR tasks (search and recommendation), thereby establishing a comprehensive benchmark.
Moreover, FairDiverse is highly extensible, providing multiple APIs that empower IR researchers to swiftly develop and evaluate their own fairness and diversity aware models, while ensuring fair comparisons with existing baselines. The project is open-sourced and available on https://github.com/XuChen0427/FairDiverse.",True,True,Chen Xu and Zhirui Deng and Clara Rus and Xiaopeng Ye and Yuanna Liu and Jun Xu and Zhicheng Dou and Ji-Rong Wen and Maarten de Rijke,2025.0,,https://arxiv.org/abs/2502.11883,,,"FairDiverse: A Comprehensive Toolkit for Fair and Diverse Information Retrieval Algorithms",FairDiverse: A Comprehensive Toolkit for Fair and Diverse ... - arXiv,https://arxiv.org/html/2502.11883v1,"FairDiverse offers a framework for integrating fairness- and diversity-focused methods, including pre-processing, in-processing, and post-processing techniques." "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,Calmon17,\cite{Calmon17},Optimized Data Pre-Processing for Discrimination Prevention,http://arxiv.org/abs/1704.03354v1,"Non-discrimination is a recognized objective in algorithmic decision making. In this paper, we introduce a novel probabilistic formulation of data pre-processing for reducing discrimination. We propose a convex optimization for learning a data transformation with three goals: controlling discrimination, limiting distortion in individual data samples, and preserving utility. We characterize the impact of limited sample size in accomplishing this objective, and apply two instances of the proposed optimization to datasets, including one on real-world criminal recidivism. The results demonstrate that all three criteria can be simultaneously achieved and also reveal interesting patterns of bias in American society.",True,True,"Calmon, Flavio P. and Wei, Dennis and Vinzamuri, Bhanukiran and Ramamurthy, Karthikeyan Natesan and Varshney, Kush R.",2017.0,,,,,Optimized Data Pre-Processing for Discrimination Prevention,[PDF] Optimized Pre-Processing for Discrimination Prevention - NIPS,http://papers.neurips.cc/paper/6988-optimized-pre-processing-for-discrimination-prevention.pdf,"We propose a convex optimization for learning a data transformation with three goals: controlling discrimination, limiting distortion in individual data samples" "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,xiong2024fairwasp,\cite{xiong2024fairwasp},FairWASP: Fast and Optimal Fair Wasserstein Pre-processing,http://arxiv.org/abs/2311.00109v3,"Recent years have seen a surge of machine learning approaches aimed at reducing disparities in model outputs across different subgroups. In many settings, training data may be used in multiple downstream applications by different users, which means it may be most effective to intervene on the training data itself. In this work, we present FairWASP, a novel pre-processing approach designed to reduce disparities in classification datasets without modifying the original data. FairWASP returns sample-level weights such that the reweighted dataset minimizes the Wasserstein distance to the original dataset while satisfying (an empirical version of) demographic parity, a popular fairness criterion. We show theoretically that integer weights are optimal, which means our method can be equivalently understood as duplicating or eliminating samples. 
FairWASP can therefore be used to construct datasets which can be fed into any classification method, not just methods which accept sample weights. Our work is based on reformulating the pre-processing task as a large-scale mixed-integer program (MIP), for which we propose a highly efficient algorithm based on the cutting plane method. Experiments demonstrate that our proposed optimization algorithm significantly outperforms state-of-the-art commercial solvers in solving both the MIP and its linear program relaxation. Further experiments highlight the competitive performance of FairWASP in reducing disparities while preserving accuracy in downstream classification settings.",True,True,"Xiong, Zikai and Dalmasso, Niccolò and Mishler, Alan and Potluru, Vamsi K and Balch, Tucker and Veloso, Manuela",2024.0,,,,,FairWASP: Fast and Optimal Fair Wasserstein Pre-processing,[PDF] FairWASP: Fast and Optimal Fair Wasserstein Pre-processing,https://ojs.aaai.org/index.php/AAAI/article/view/29545/30909,"In this work, we present FairWASP, a novel pre-processing approach designed to reduce disparities in classification datasets without modifying the original" "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,Tang23FairBias,\cite{Tang23FairBias},When Fairness meets Bias: a Debiased Framework for Fairness aware Top-N Recommendation,,,True,False,"Tang, Jiakai and Shen, Shiqi and Wang, Zhipeng and Gong, Zhi and Zhang, Jingsen and Chen, Xu",2023.0,,,10.1145/3604915.3608770,,When Fairness meets Bias: a Debiased Framework for Fairness aware Top-N Recommendation,a Debiased Framework for Fairness aware Top-N ...,https://openreview.net/forum?id=gb0XymwzJq&referrer=%5Bthe%20profile%20of%20Jiakai%20Tang%5D(%2Fprofile%3Fid%3D~Jiakai_Tang1),"To study this problem, in this paper, we formally define a novel task named as unbiased fairness aware Top-N recommendation. For solving this task, we firstly" "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,xu2023p,\cite{xu2023p},P-MMF: Provider Max-min Fairness Re-ranking in Recommender System,,,True,False,"Xu, Chen and Chen, Sirui and Xu, Jun and Shen, Weiran and Zhang, Xiao and Wang, Gang and Dong, Zhenhua",2023.0,,,,,P-MMF: Provider Max-min Fairness Re-ranking in Recommender System,[2303.06660] P-MMF: Provider Max-min Fairness Re- ...,https://arxiv.org/abs/2303.06660,"In this paper, we proposed an online re-ranking model named Provider Max-min Fairness Re-ranking (P-MMF) to tackle the problem." "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,fairrec,\cite{fairrec},"FairRec: Two-Sided Fairness for Personalized Recommendations in Two-Sided Platforms",http://arxiv.org/abs/2002.10764v2,"We investigate the problem of fair recommendation in the context of two-sided online platforms, comprising customers on one side and producers on the other.
Traditionally, recommendation services in these platforms have focused on maximizing customer satisfaction by tailoring the results according to the personalized preferences of individual customers. However, our investigation reveals that such customer-centric design may lead to unfair distribution of exposure among the producers, which may adversely impact their well-being. On the other hand, a producer-centric design might become unfair to the customers. Thus, we consider fairness issues that span both customers and producers. Our approach involves a novel mapping of the fair recommendation problem to a constrained version of the problem of fairly allocating indivisible goods. Our proposed FairRec algorithm guarantees at least Maximin Share (MMS) of exposure for most of the producers and Envy-Free up to One item (EF1) fairness for every customer. Extensive evaluations over multiple real-world datasets show the effectiveness of FairRec in ensuring two-sided fairness while incurring a marginal loss in the overall recommendation quality.",True,True,"Patro, Gourab K. and Biswas, Arpita and Ganguly, Niloy and Gummadi, Krishna P. and Chakraborty, Abhijnan",2020.0,,,,,"FairRec: Two-Sided Fairness for Personalized Recommendations in Two-Sided Platforms",Two-Sided Fairness for Personalized Recommendations in ...,https://github.com/gourabkumarpatro/FairRec_www_2020,"FairRec: Two-Sided Fairness for Personalized Recommendations in Two-Sided Platforms. Gourab K Patro, Arpita Biswas, Niloy Ganguly, Krishna P. Gummadi and" "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,abdollahpouri2020multistakeholder,\cite{abdollahpouri2020multistakeholder},Multistakeholder Recommendation: Survey and Research Directions,,,True,False,"Abdollahpouri, Himan and Adomavicius, Gediminas and Burke, Robin and Guy, Ido and Jannach, Dietmar and Kamishima, Toshihiro and Krasnodebski, Jan and Pizzato, Luiz",2020.0,,,,User Modeling and User-Adapted Interaction,Multistakeholder Recommendation: Survey and Research Directions,Multistakeholder recommendation: Survey and research directions,https://experts.colorado.edu/display/pubid_280350,Multistakeholder recommendation: Survey and research directions | CU Experts | CU Boulder. "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,abdollahpouri2019multi,\cite{abdollahpouri2019multi},"Multi-stakeholder Recommendation and its Connection to Multi-sided Fairness",http://arxiv.org/abs/1907.13158v1,"There is growing research interest in recommendation as a multi-stakeholder problem, one where the interests of multiple parties should be taken into account. This category subsumes some existing well-established areas of recommendation research including reciprocal and group recommendation, but a detailed taxonomy of different classes of multi-stakeholder recommender systems is still lacking. Fairness-aware recommendation has also grown as a research area, but its close connection with multi-stakeholder recommendation is not always recognized. 
In this paper, we define the most commonly observed classes of multi-stakeholder recommender systems and discuss how different fairness concerns may come into play in such systems.",True,True,"Abdollahpouri, Himan and Burke, Robin",2019.0,,,,arXiv preprint arXiv:1907.13158,"Multi-stakeholder Recommendation and its Connection to Multi-sided Fairness",Multi-stakeholder Recommendation and its Connection to ...,https://www.researchgate.net/publication/334821953_Multi-stakeholder_Recommendation_and_its_Connection_to_Multi-sided_Fairness,"In this paper, we define the most commonly observed classes of multi-stakeholder recommender systems and discuss how different fairness concerns may come into" "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,abdollahpouri2019unfairness,\cite{abdollahpouri2019unfairness},The Unfairness of Popularity Bias in Recommendation,http://arxiv.org/abs/1907.13286v3,"Recommender systems are known to suffer from the popularity bias problem: popular (i.e. frequently rated) items get a lot of exposure while less popular ones are under-represented in the recommendations. Research in this area has been mainly focusing on finding ways to tackle this issue by increasing the number of recommended long-tail items or otherwise the overall catalog coverage. In this paper, however, we look at this problem from the users' perspective: we want to see how popularity bias causes the recommendations to deviate from what the user expects to get from the recommender system. We define three different groups of users according to their interest in popular items (Niche, Diverse and Blockbuster-focused) and show the impact of popularity bias on the users in each group. Our experimental results on a movie dataset show that in many recommendation algorithms the recommendations the users get are extremely concentrated on popular items even if a user is interested in long-tail and non-popular items showing an extreme bias disparity.",True,True,"Abdollahpouri, Himan and Mansoury, Masoud and Burke, Robin and Mobasher, Bamshad",2019.0,,,,arXiv preprint arXiv:1907.13286,The Unfairness of Popularity Bias in Recommendation,The Unfairness of Popularity Bias in Recommendation,http://arxiv.org/pdf/1907.13286v3,"Recommender systems are known to suffer from the popularity bias problem: popular (i.e. frequently rated) items get a lot of exposure while less popular ones are under-represented in the recommendations. Research in this area has been mainly focusing on finding ways to tackle this issue by increasing the number of recommended long-tail items or otherwise the overall catalog coverage. In this paper, however, we look at this problem from the users' perspective: we want to see how popularity bias causes the recommendations to deviate from what the user expects to get from the recommender system. We define three different groups of users according to their interest in popular items (Niche, Diverse and Blockbuster-focused) and show the impact of popularity bias on the users in each group. Our experimental results on a movie dataset show that in many recommendation algorithms the recommendations the users get are extremely concentrated on popular items even if a user is interested in long-tail and non-popular items showing an extreme bias disparity." 
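The popularity-bias and exposure-fairness entries above all quantify how recommendation exposure concentrates on head items. As a concrete illustration, the short sketch below computes a Gini coefficient over per-item exposure counts in a set of ranked lists; this is a generic inequality measure, not code from any cited paper, and the function name exposure_gini and the toy data are purely illustrative.

```python
# Minimal sketch (assumed, not from any cited paper's code): Gini coefficient
# of per-item exposure counts across recommendation lists.
from collections import Counter

def exposure_gini(recommendation_lists):
    """Gini of item exposure counts: 0 = perfectly equal, 1 = fully concentrated."""
    counts = Counter()
    for ranked_items in recommendation_lists:
        counts.update(ranked_items)          # each appearance = one unit of exposure
    values = sorted(counts.values())
    n, total = len(values), sum(values)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula: G = 2 * sum_i(i * x_i) / (n * sum(x)) - (n + 1) / n,
    # with x sorted ascending and i running from 1 to n.
    weighted = sum((i + 1) * x for i, x in enumerate(values))
    return (2.0 * weighted) / (n * total) - (n + 1.0) / n

# Example: two users shown mostly the same popular items -> nonzero concentration.
print(exposure_gini([["a", "b", "c"], ["a", "b", "d"]]))
```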
"Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,li2021user,\cite{li2021user},User-oriented Fairness in Recommendation,http://arxiv.org/abs/2104.10671v1,"As a highly data-driven application, recommender systems could be affected by data bias, resulting in unfair results for different data groups, which could be a reason that affects the system performance. Therefore, it is important to identify and solve the unfairness issues in recommendation scenarios. In this paper, we address the unfairness problem in recommender systems from the user perspective. We group users into advantaged and disadvantaged groups according to their level of activity, and conduct experiments to show that current recommender systems will behave unfairly between two groups of users. Specifically, the advantaged users (active) who only account for a small proportion in data enjoy much higher recommendation quality than those disadvantaged users (inactive). Such bias can also affect the overall performance since the disadvantaged users are the majority. To solve this problem, we provide a re-ranking approach to mitigate this unfairness problem by adding constraints over evaluation metrics. The experiments we conducted on several real-world datasets with various recommendation algorithms show that our approach can not only improve group fairness of users in recommender systems, but also achieve better overall recommendation performance.",True,True,"Li, Yunqi and Chen, Hanxiong and Fu, Zuohui and Ge, Yingqiang and Zhang, Yongfeng",2021.0,,,,,User-oriented Fairness in Recommendation,User-oriented Fairness in Recommendation,https://dl.acm.org/doi/10.1145/3442381.3449866,"In this paper, we address the unfairness problem in recommender systems from the user perspective. We group users into advantaged and disadvantaged groups." "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,TaxRank,\cite{TaxRank},A Taxation Perspective for Fair Re-ranking,http://arxiv.org/abs/2404.17826v1,"Fair re-ranking aims to redistribute ranking slots among items more equitably to ensure responsibility and ethics. The exploration of redistribution problems has a long history in economics, offering valuable insights for conceptualizing fair re-ranking as a taxation process. Such a formulation provides us with a fresh perspective to re-examine fair re-ranking and inspire the development of new methods. From a taxation perspective, we theoretically demonstrate that most previous fair re-ranking methods can be reformulated as an item-level tax policy. Ideally, a good tax policy should be effective and conveniently controllable to adjust ranking resources. However, both empirical and theoretical analyses indicate that the previous item-level tax policy cannot meet two ideal controllable requirements: (1) continuity, ensuring minor changes in tax rates result in small accuracy and fairness shifts; (2) controllability over accuracy loss, ensuring precise estimation of the accuracy loss under a specific tax rate. To overcome these challenges, we introduce a new fair re-ranking method named Tax-rank, which levies taxes based on the difference in utility between two items. Then, we efficiently optimize such an objective by utilizing the Sinkhorn algorithm in optimal transport. 
Upon a comprehensive analysis, our model Tax-rank offers a superior tax policy for fair re-ranking, theoretically demonstrating both continuity and controllability over accuracy loss. Experimental results show that Tax-rank outperforms all state-of-the-art baselines in terms of effectiveness and efficiency on recommendation and advertising tasks.",True,True,"Xu, Chen and Ye, Xiaopeng and Wang, Wenjie and Pang, Liang and Xu, Jun and Chua, Tat-Seng",2024.0,,https://doi.org/10.1145/3626772.3657766,10.1145/3626772.3657766,,A Taxation Perspective for Fair Re-ranking,[PDF] A Taxation Perspective for Fair Re-ranking,https://gsai.ruc.edu.cn/uploads/20240924/2da852a5ebce07442e6392b4505ea4aa.pdf,Fair re-ranking aims to redistribute ranking slots among items more equitably to ensure responsibility and ethics. "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,singh2019policy,\cite{singh2019policy},Policy Learning for Fairness in Ranking,http://arxiv.org/abs/1902.04056v2,"Conventional Learning-to-Rank (LTR) methods optimize the utility of the rankings to the users, but they are oblivious to their impact on the ranked items. However, there has been a growing understanding that the latter is important to consider for a wide range of ranking applications (e.g. online marketplaces, job placement, admissions). To address this need, we propose a general LTR framework that can optimize a wide range of utility metrics (e.g. NDCG) while satisfying fairness of exposure constraints with respect to the items. This framework expands the class of learnable ranking functions to stochastic ranking policies, which provides a language for rigorously expressing fairness specifications. Furthermore, we provide a new LTR algorithm called Fair-PG-Rank for directly searching the space of fair ranking policies via a policy-gradient approach. Beyond the theoretical evidence in deriving the framework and the algorithm, we provide empirical results on simulated and real-world datasets verifying the effectiveness of the approach in individual and group-fairness settings.",True,True,"Singh, Ashudeep and Joachims, Thorsten",2019.0,,,,Advances in Neural Information Processing Systems,Policy Learning for Fairness in Ranking,Policy Learning for Fairness in Ranking,http://arxiv.org/pdf/1902.04056v2,"Conventional Learning-to-Rank (LTR) methods optimize the utility of the rankings to the users, but they are oblivious to their impact on the ranked items. However, there has been a growing understanding that the latter is important to consider for a wide range of ranking applications (e.g. online marketplaces, job placement, admissions). To address this need, we propose a general LTR framework that can optimize a wide range of utility metrics (e.g. NDCG) while satisfying fairness of exposure constraints with respect to the items. This framework expands the class of learnable ranking functions to stochastic ranking policies, which provides a language for rigorously expressing fairness specifications. Furthermore, we provide a new LTR algorithm called Fair-PG-Rank for directly searching the space of fair ranking policies via a policy-gradient approach. Beyond the theoretical evidence in deriving the framework and the algorithm, we provide empirical results on simulated and real-world datasets verifying the effectiveness of the approach in individual and group-fairness settings."
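The Fair-PG-Rank entry above optimizes over stochastic ranking policies. A common concrete instance of such a policy is a Plackett-Luce distribution over item scores, which can be sampled with the Gumbel trick; the sketch below illustrates this and estimates each item's expected exposure under the policy. It is an assumed illustration of the general idea, not the paper's implementation, and all names and numbers are hypothetical.

```python
# Hedged sketch: sampling rankings from a Plackett-Luce policy via the
# Gumbel trick, then estimating expected per-item exposure by Monte Carlo.
import numpy as np

rng = np.random.default_rng(0)

def sample_ranking(logits, rng):
    """One ranking from a Plackett-Luce distribution with the given logits."""
    gumbel = rng.gumbel(size=len(logits))   # perturb each score independently
    return np.argsort(-(logits + gumbel))   # sort by perturbed score

logits = np.array([2.0, 1.0, 0.5])          # model scores for three items
exposure = np.zeros((1000, 3))
for t in range(1000):
    for pos, item in enumerate(sample_ranking(logits, rng)):
        exposure[t, item] = 1.0 / np.log2(pos + 2)   # DCG-style position weight

# Expected exposure per item under the policy; fairness-of-exposure
# constraints can be written against this quantity.
print(exposure.mean(axis=0))
```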
"Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,jaenich2024fairness,\cite{jaenich2024fairness},Fairness-Aware Exposure Allocation via Adaptive Reranking,,,True,False,"Jaenich, Thomas and McDonald, Graham and Ounis, Iadh",2024.0,,,,,Fairness-Aware Exposure Allocation via Adaptive Reranking,[PDF] Fairness-Aware Exposure Allocation via Adaptive Reranking,https://eprints.gla.ac.uk/323883/1/323883.pdf,"In this paper, we explore how adaptive re-ranking affects the fair distribution of exposure, compared to a standard re-ranking. 1504. Page 2" "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,TaoSIGIRAP,\cite{TaoSIGIRAP},Vertical Allocation-based Fair Exposure Amortizing in Ranking,http://arxiv.org/abs/2204.03046v2,"Result ranking often affects consumer satisfaction as well as the amount of exposure each item receives in the ranking services. Myopically maximizing customer satisfaction by ranking items only according to relevance will lead to unfair distribution of exposure for items, followed by unfair opportunities and economic gains for item producers/providers. Such unfairness will force providers to leave the system and discourage new providers from coming in. Eventually, fewer purchase options would be left for consumers, and the utilities of both consumers and providers would be harmed. Thus, to maintain a balance between ranking relevance and fairness is crucial for both parties. In this paper, we focus on the exposure fairness in ranking services. We demonstrate that existing methods for amortized fairness optimization could be suboptimal in terms of fairness-relevance tradeoff because they fail to utilize the prior knowledge of consumers. We further propose a novel algorithm named Vertical Allocation-based Fair Exposure Amortizing in Ranking, or VerFair, to reach a better balance between exposure fairness and ranking performance. Extensive experiments on three real-world datasets show that VerFair significantly outperforms state-of-the-art fair ranking algorithms in fairness-performance trade-offs from both the individual level and the group level.",True,True,"Yang, Tao and Xu, Zhichao and Ai, Qingyao",2023.0,,https://doi.org/10.1145/3624918.3625313,10.1145/3624918.3625313,,Vertical Allocation-based Fair Exposure Amortizing in Ranking,Vertical Allocation-based Fair Exposure Amortizing in ...,https://arxiv.org/abs/2204.03046,"by T Yang · 2022 · Cited by 10 — A novel algorithm named Vertical Allocation-based Fair Exposure Amortizing in Ranking, or VerFair, to reach a better balance between exposure fairness and" "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,do2022optimizing,\cite{do2022optimizing},Optimizing generalized Gini indices for fairness in rankings,http://arxiv.org/abs/2204.06521v4,"There is growing interest in designing recommender systems that aim at being fair towards item producers or their least satisfied users. Inspired by the domain of inequality measurement in economics, this paper explores the use of generalized Gini welfare functions (GGFs) as a means to specify the normative criterion that recommender systems should optimize for. GGFs weight individuals depending on their ranks in the population, giving more weight to worse-off individuals to promote equality. 
Depending on these weights, GGFs minimize the Gini index of item exposure to promote equality between items, or focus on the performance on specific quantiles of least satisfied users. GGFs for ranking are challenging to optimize because they are non-differentiable. We resolve this challenge by leveraging tools from non-smooth optimization and projection operators used in differentiable sorting. We present experiments using real datasets with up to 15k users and items, which show that our approach obtains better trade-offs than the baselines on a variety of recommendation tasks and fairness criteria.",True,True,"Do, Virginie and Usunier, Nicolas",2022.0,,,,,Optimizing generalized Gini indices for fairness in rankings,Optimizing generalized Gini indices for fairness in rankings,http://arxiv.org/pdf/2204.06521v4,"There is growing interest in designing recommender systems that aim at being fair towards item producers or their least satisfied users. Inspired by the domain of inequality measurement in economics, this paper explores the use of generalized Gini welfare functions (GGFs) as a means to specify the normative criterion that recommender systems should optimize for. GGFs weight individuals depending on their ranks in the population, giving more weight to worse-off individuals to promote equality. Depending on these weights, GGFs minimize the Gini index of item exposure to promote equality between items, or focus on the performance on specific quantiles of least satisfied users. GGFs for ranking are challenging to optimize because they are non-differentiable. We resolve this challenge by leveraging tools from non-smooth optimization and projection operators used in differentiable sorting. We present experiments using real datasets with up to 15k users and items, which show that our approach obtains better trade-offs than the baselines on a variety of recommendation tasks and fairness criteria." "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,cpfair,\cite{cpfair},"CPFair: Personalized Consumer and Producer Fairness Re-ranking for Recommender Systems",http://arxiv.org/abs/2204.08085v1,"Recently, there has been a rising awareness that when machine learning (ML) algorithms are used to automate choices, they may treat/affect individuals unfairly, with legal, ethical, or economic consequences. Recommender systems are prominent examples of such ML systems that assist users in making high-stakes judgments. A common trend in the previous literature research on fairness in recommender systems is that the majority of works treat user and item fairness concerns separately, ignoring the fact that recommender systems operate in a two-sided marketplace. In this work, we present an optimization-based re-ranking approach that seamlessly integrates fairness constraints from both the consumer and producer-side in a joint objective framework. 
We demonstrate through large-scale experiments on 8 datasets that our proposed method is capable of improving both consumer and producer fairness without reducing overall recommendation quality, demonstrating the role algorithms may play in minimizing data biases.",True,True,"Naghiaei, Mohammadmehdi and Rahmani, Hossein A and Deldjoo, Yashar",2022.0,,,,arXiv preprint arXiv:2204.08085,"CPFair: Personalized Consumer and Producer Fairness Re-ranking for Recommender Systems",CPFair: Personalized Consumer and Producer Fairness Re-ranking ...,https://arxiv.org/abs/2204.08085,We present an optimization-based re-ranking approach that seamlessly integrates fairness constraints from both the consumer and producer-side in a joint "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,wu2021tfrom,\cite{wu2021tfrom},"TFROM: A Two-sided Fairness-Aware Recommendation Model for Both Customers and Providers",http://arxiv.org/abs/2104.09024v1,"At present, most research on the fairness of recommender systems is conducted either from the perspective of customers or from the perspective of product (or service) providers. However, such a practice ignores the fact that when fairness is guaranteed to one side, the fairness and rights of the other side are likely to reduce. In this paper, we consider recommendation scenarios from the perspective of two sides (customers and providers). From the perspective of providers, we consider the fairness of the providers' exposure in recommender system. For customers, we consider the fairness of the reduced quality of recommendation results due to the introduction of fairness measures. We theoretically analyzed the relationship between recommendation quality, customers fairness, and provider fairness, and design a two-sided fairness-aware recommendation model (TFROM) for both customers and providers. Specifically, we design two versions of TFROM for offline and online recommendation. The effectiveness of the model is verified on three real-world data sets. The experimental results show that TFROM provides better two-sided fairness while still maintaining a higher level of personalization than the baseline algorithms.",True,True,"Wu, Yao and Cao, Jian and Xu, Guandong and Tan, Yudong",2021.0,,,,,"TFROM: A Two-sided Fairness-Aware Recommendation Model for Both Customers and Providers",TFROM: A Two-sided Fairness-Aware Recommendation Model for ...,https://arxiv.org/abs/2104.09024,"In this paper, we consider recommendation scenarios from the perspective of two sides (customers and providers). From the perspective of" "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,fairrecplus,\cite{fairrecplus},Towards Fair Recommendation in Two-Sided Platforms,http://arxiv.org/abs/2201.01180v1,"Many online platforms today (such as Amazon, Netflix, Spotify, LinkedIn, and AirBnB) can be thought of as two-sided markets with producers and customers of goods and services. Traditionally, recommendation services in these platforms have focused on maximizing customer satisfaction by tailoring the results according to the personalized preferences of individual customers. However, our investigation reinforces the fact that such customer-centric design of these services may lead to unfair distribution of exposure to the producers, which may adversely impact their well-being. On the other hand, a pure producer-centric design might become unfair to the customers. 
As more and more people are depending on such platforms to earn a living, it is important to ensure fairness to both producers and customers. In this work, by mapping a fair personalized recommendation problem to a constrained version of the problem of fairly allocating indivisible goods, we propose to provide fairness guarantees for both sides. Formally, our proposed {\em FairRec} algorithm guarantees Maxi-Min Share ($\alpha$-MMS) of exposure for the producers, and Envy-Free up to One Item (EF1) fairness for the customers. Extensive evaluations over multiple real-world datasets show the effectiveness of {\em FairRec} in ensuring two-sided fairness while incurring a marginal loss in overall recommendation quality. Finally, we present a modification of FairRec (named as FairRecPlus) that at the cost of additional computation time, improves the recommendation performance for the customers, while maintaining the same fairness guarantees.",True,True,"Biswas, Arpita and Patro, Gourab K. and Ganguly, Niloy and Gummadi, Krishna P and Chakraborty, Abhijnan",2021.0,,,,ACM Transactions on the Web (TWEB),Towards Fair Recommendation in Two-Sided Platforms,Toward Fair Recommendation in Two-sided Platforms,https://dl.acm.org/doi/10.1145/3503624,"While FairRec provides two-sided fair recommendations, it can be further tweaked to improve the recommendation performance for the customers. We" "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,zafar2019fairness,\cite{zafar2019fairness},Fairness Constraints: A Flexible Approach for Fair Classification,,,True,False,"Zafar, Muhammad Bilal and Valera, Isabel and Gomez-Rodriguez, Manuel and Gummadi, Krishna P",2019.0,,,,The Journal of Machine Learning Research,Fairness Constraints: A Flexible Approach for Fair Classification,Fairness Constraints: A Flexible Approach for Fair Classification,https://jmlr.org/papers/v20/18-262.html,"In this context, there is a need for computational techniques to limit unfairness in algorithmic decision making. In this work, we take a step forward to fulfill that need and introduce a flexible constraint-based framework to enable the design of fair margin-based classifiers. The main technical innovation of our framework is a general and intuitive measure of decision boundary unfairness, which serves as a tractable proxy to several of the most popular computational definitions of unfairness from the literature. Leveraging our measure, we can reduce the design of fair margin-based classifiers to adding tractable constraints on their decision boundaries. Experiments on multiple synthetic and real-world datasets show that our framework is able to successfully limit unfairness, often at a small cost in terms of accuracy."
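Several of the re-ranking entries above (P-MMF, FairRec, TFROM, CPFair) share one structural idea: post-process a relevance ranking so that provider exposure is traded off against accuracy. The sketch below shows the general shape of such a re-ranker as a greedy slot-filling loop with an exposure penalty; it is a simplified illustration of the idea rather than any cited algorithm, and fair_rerank, lam, and the toy inputs are invented for the example.

```python
# Hedged sketch of a generic provider-fairness re-ranker: greedily fill each
# slot by relevance minus a penalty that grows with the chosen item's
# provider exposure so far. Not P-MMF, FairRec, TFROM, or CPFair themselves.

def fair_rerank(candidates, provider_of, k, lam=0.5):
    """candidates: {item: relevance}; provider_of: {item: provider}; k: slots."""
    exposure = {p: 0 for p in set(provider_of.values())}
    ranking, pool = [], dict(candidates)
    for _ in range(k):
        # Trade off relevance against the provider's accumulated exposure.
        item = max(pool, key=lambda i: pool[i] - lam * exposure[provider_of[i]])
        ranking.append(item)
        exposure[provider_of[item]] += 1
        del pool[item]
    return ranking

# With lam=0.5, the second slot goes to provider p2's item instead of a
# second item from p1, even though i2 is more relevant than i3.
print(fair_rerank({"i1": 0.9, "i2": 0.8, "i3": 0.7},
                  {"i1": "p1", "i2": "p1", "i3": "p2"}, k=2))
```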
"Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,lambert1992distribution,\cite{lambert1992distribution},The Distribution and Redistribution of Income,,,True,False,"Lambert, Peter J.",1992.0,,,,,The Distribution and Redistribution of Income,[PDF] The distribution and redistribution of income - Cornell eCommons,https://ecommons.cornell.edu/bitstreams/4ec59bd5-8672-42b0-985c-9efd84472f75/download,"This book seeks ""to bring together, in a single body, the many strands of formal analysis of income distribution and redistribution which have developed since" "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,saito2022fair,\cite{saito2022fair},"Fair Ranking as Fair Division: Impact-Based Individual Fairness in Ranking",http://arxiv.org/abs/2206.07247v2,"Rankings have become the primary interface in two-sided online markets. Many have noted that the rankings not only affect the satisfaction of the users (e.g., customers, listeners, employers, travelers), but that the position in the ranking allocates exposure -- and thus economic opportunity -- to the ranked items (e.g., articles, products, songs, job seekers, restaurants, hotels). This has raised questions of fairness to the items, and most existing works have addressed fairness by explicitly linking item exposure to item relevance. However, we argue that any particular choice of such a link function may be difficult to defend, and we show that the resulting rankings can still be unfair. To avoid these shortcomings, we develop a new axiomatic approach that is rooted in principles of fair division. This not only avoids the need to choose a link function, but also more meaningfully quantifies the impact on the items beyond exposure. Our axioms of envy-freeness and dominance over uniform ranking postulate that for a fair ranking policy every item should prefer their own rank allocation over that of any other item, and that no item should be actively disadvantaged by the rankings. To compute ranking policies that are fair according to these axioms, we propose a new ranking objective related to the Nash Social Welfare. We show that the solution has guarantees regarding its envy-freeness, its dominance over uniform rankings for every item, and its Pareto optimality. In contrast, we show that conventional exposure-based fairness can produce large amounts of envy and have a highly disparate impact on the items. 
Beyond these theoretical results, we illustrate empirically how our framework controls the trade-off between impact-based individual item fairness and user utility.",True,True,"Saito, Yuta and Joachims, Thorsten",2022.0,,,,,"Fair Ranking as Fair Division: Impact-Based Individual Fairness in Ranking",[PDF] Fair Ranking as Fair Division: Impact-Based Individual Fairness in ...,https://www.cs.cornell.edu/people/tj/publications/saito_joachims_22b,Our axioms of envy-freeness and dominance over uniform ranking postulate that for a fair ranking policy every item should prefer their own rank allocation over "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,hanlon2010review,\cite{hanlon2010review},A Review of Tax Research,,,True,False,"Hanlon, Michelle and Heitzman, Shane",2010.0,,,,Journal of accounting and Economics,A Review of Tax Research,"A Review of Tax Research by Michelle Hanlon, Shane Heitzman",https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1476561,"Hanlon, Michelle and Heitzman, Shane, A Review of Tax Research (July 25, 2010)." "Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics",2504.14991v1,nerre2001concept,\cite{nerre2001concept},The Concept of Tax Culture,,,True,False,"Nerré, Birger",2001.0,,,,,The Concept of Tax Culture,THE CONCEPT OF TAX CULTURE IN CONTEMPORARY TIMES,https://heinonline.org/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/iusplr13&section=21,"Accordingly, tax culture is more than ""culture of taxation"" and ""tax-paying culture"" and studies the motives which impact on voluntary tax compliance," "Unconstrained Monotonic Calibration of Predictions in Deep Ranking Systems",2504.14243v1,HB,\cite{HB},Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers,,,True,False,"Zadrozny, Bianca and Elkan, Charles",2001.0,,https://dl.acm.org/doi/10.5555/645530.655658,,,Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers,(PDF) Obtaining Calibrated Probability Estimates from Decision ...,https://www.researchgate.net/publication/2368094_Obtaining_Calibrated_Probability_Estimates_from_Decision_Trees_and_Naive_Bayesian_Classifiers,This paper presents simple but successful methods for obtaining calibrated probability estimates from decision tree and naive Bayesian classifiers.
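The Zadrozny and Elkan (2001) entry above introduces histogram binning for calibration: group validation scores into bins and replace each score with its bin's empirical positive rate. A minimal sketch follows, assuming equal-frequency bins; the function names and the use of quantile-based edges are illustrative choices, not the paper's exact procedure.

```python
# Minimal sketch of histogram-binning calibration (equal-frequency bins
# assumed; names illustrative).
import numpy as np

def fit_histogram_binning(scores, labels, n_bins=10):
    scores, labels = np.asarray(scores, float), np.asarray(labels, float)
    # Equal-frequency bin edges over the validation scores.
    edges = np.quantile(scores, np.linspace(0.0, 1.0, n_bins + 1))
    rates = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (scores >= lo) & (scores <= hi)
        rates.append(labels[mask].mean() if mask.any() else 0.0)
    return edges, np.array(rates)

def calibrate(scores, edges, rates):
    # Map each score to the empirical positive rate of its bin.
    idx = np.searchsorted(edges, scores, side="right") - 1
    return rates[np.clip(idx, 0, len(rates) - 1)]

# Toy usage: simulated scores whose true positive rate equals the score.
s = np.random.rand(1000)
y = (np.random.rand(1000) < s).astype(int)
edges, rates = fit_histogram_binning(s, y)
print(calibrate(np.array([0.05, 0.5, 0.95]), edges, rates))
```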
"Unconstrained Monotonic Calibration of Predictions in Deep Ranking Systems",2504.14243v1,MBCT,\cite{MBCT},"MBCT: Tree-Based Feature-Aware Binning for Individual Uncertainty Calibration",http://arxiv.org/abs/2202.04348v2,"Most machine learning classifiers only concern classification accuracy, while certain applications (such as medical diagnosis, meteorological forecasting, and computation advertising) require the model to predict the true probability, known as a calibrated estimate. In previous work, researchers have developed several calibration methods to post-process the outputs of a predictor to obtain calibrated values, such as binning and scaling methods. Compared with scaling, binning methods are shown to have distribution-free theoretical guarantees, which motivates us to prefer binning methods for calibration. However, we notice that existing binning methods have several drawbacks: (a) the binning scheme only considers the original prediction values, thus limiting the calibration performance; and (b) the binning approach is non-individual, mapping multiple samples in a bin to the same value, and thus is not suitable for order-sensitive applications. In this paper, we propose a feature-aware binning framework, called Multiple Boosting Calibration Trees (MBCT), along with a multi-view calibration loss to tackle the above issues. Our MBCT optimizes the binning scheme by the tree structures of features, and adopts a linear function in a tree node to achieve individual calibration. Our MBCT is non-monotonic, and has the potential to improve order accuracy, due to its learnable binning scheme and the individual calibration. We conduct comprehensive experiments on three datasets in different fields. Results show that our method outperforms all competing models in terms of both calibration error and order accuracy. We also conduct simulation experiments, justifying that the proposed multi-view calibration loss is a better metric in modeling calibration error.",True,True,"Huang, Siguang and Wang, Yunli and Mou, Lili and Zhang, Huayue and Zhu, Han and Yu, Chuan and Zheng, Bo",2022.0,,https://doi.org/10.1145/3485447.3512096,10.1145/3485447.3512096,,"MBCT: Tree-Based Feature-Aware Binning for Individual Uncertainty Calibration",MBCT: Tree-Based Feature-Aware Binning for Individual Uncertainty ...,https://dl.acm.org/doi/10.1145/3485447.3512096,"Our MBCT is non-monotonic, and has the potential to improve order accuracy, due to its learnable binning scheme and the individual calibration." "Unconstrained Monotonic Calibration of Predictions in Deep Ranking Systems",2504.14243v1,IR,\cite{IR},Transforming classifier scores into accurate multiclass probability estimates,,,True,False,"Zadrozny, Bianca and Elkan, Charles",2002.0,,https://doi.org/10.1145/775047.775151,10.1145/775047.775151,,Transforming classifier scores into accurate multiclass probability estimates,(PDF) Transforming Classifier Scores into Accurate Multiclass ...,https://www.researchgate.net/publication/2571315_Transforming_Classifier_Scores_into_Accurate_Multiclass_Probability_Estimates,"Here, we show how to obtain accurate probability estimates for multiclass problems by combining calibrated binary probability estimates." 
"Unconstrained Monotonic Calibration of Predictions in Deep Ranking Systems",2504.14243v1,SIR,\cite{SIR},Calibrating User Response Predictions in Online Advertising,,,True,False,"Deng, Chao and Wang, Hao and Tan, Qing and Xu, Jian and Gai, Kun",2020.0,,https://doi.org/10.1007/978-3-030-67667-4_13,10.1007/978-3-030-67667-4_13,,Calibrating User Response Predictions in Online Advertising,Calibrating User Response Predictions in Online Advertising,https://dl.acm.org/doi/abs/10.1007/978-3-030-67667-4_13,"To obtain accurate probability, calibration is usually used to transform predicted probabilities to posterior probabilities." "Unconstrained Monotonic Calibration of Predictions in Deep Ranking Systems",2504.14243v1,PlattScaling,\cite{PlattScaling},Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods,,,True,False,"Platt, John and others",1999.0,,https://home.cs.colorado.edu/~mozer/Teaching/syllabi/6622/papers/Platt1999.pdf,,Advances in large margin classifiers,Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods,[PDF] Probabilistic Outputs for Support Vector Machines and Comparisons ...,https://home.cs.colorado.edu/~mozer/Teaching/syllabi/6622/papers/Platt1999.pdf,This chapter compares classification error rate and likelihood scores for an SVM plus sigmoid versus a kernel method trained with a regularized. "Unconstrained Monotonic Calibration of Predictions in Deep Ranking Systems",2504.14243v1,TemperatureScaling,\cite{TemperatureScaling},Revisiting the Calibration of Modern Neural Networks,http://arxiv.org/abs/2106.07998v2,"Accurate estimation of predictive uncertainty (model calibration) is essential for the safe application of neural networks. Many instances of miscalibration in modern neural networks have been reported, suggesting a trend that newer, more accurate models produce poorly calibrated predictions. Here, we revisit this question for recent state-of-the-art image classification models. We systematically relate model calibration and accuracy, and find that the most recent models, notably those not using convolutions, are among the best calibrated. Trends observed in prior model generations, such as decay of calibration with distribution shift or model size, are less pronounced in recent architectures. We also show that model size and amount of pretraining do not fully explain these differences, suggesting that architecture is a major determinant of calibration properties.",True,True,"Guo, Chuan and Pleiss, Geoff and Sun, Yu and Weinberger, Kilian Q.",2017.0,,https://dl.acm.org/doi/10.5555/3305381.3305518,,,Revisiting the Calibration of Modern Neural Networks,Revisiting the Calibration of Modern Neural Networks,http://arxiv.org/pdf/2106.07998v2,"Accurate estimation of predictive uncertainty (model calibration) is essential for the safe application of neural networks. Many instances of miscalibration in modern neural networks have been reported, suggesting a trend that newer, more accurate models produce poorly calibrated predictions. Here, we revisit this question for recent state-of-the-art image classification models. We systematically relate model calibration and accuracy, and find that the most recent models, notably those not using convolutions, are among the best calibrated. Trends observed in prior model generations, such as decay of calibration with distribution shift or model size, are less pronounced in recent architectures. 
We also show that model size and amount of pretraining do not fully explain these differences, suggesting that architecture is a major determinant of calibration properties." "Unconstrained Monotonic Calibration of Predictions in Deep Ranking Systems",2504.14243v1,BetaCalib,\cite{BetaCalib},Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers,,,True,False,"Kull, Meelis and Silva Filho, Telmo and Flach, Peter",2017.0,,http://proceedings.mlr.press/v54/kull17a.html,,,Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers,Beta calibration: a well-founded and easily implemented ...,https://research-information.bris.ac.uk/en/publications/beta-calibration-a-well-founded-and-easily-implemented-improvemen,"by M Kull · 2017 · Cited by 281 — Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers. Meelis Kull, Telmo De Menezes E Silva" "Unconstrained Monotonic Calibration of Predictions in Deep Ranking Systems",2504.14243v1,GammaGauss,\cite{GammaGauss},Obtaining Calibrated Probabilities with Personalized Ranking Models,http://arxiv.org/abs/2112.07428v2,"For personalized ranking models, the well-calibrated probability of an item being preferred by a user has great practical value. While existing work shows promising results in image classification, probability calibration has not been much explored for personalized ranking. In this paper, we aim to estimate the calibrated probability of how likely a user will prefer an item. We investigate various parametric distributions and propose two parametric calibration methods, namely Gaussian calibration and Gamma calibration. Each proposed method can be seen as a post-processing function that maps the ranking scores of pre-trained models to well-calibrated preference probabilities, without affecting the recommendation performance. We also design the unbiased empirical risk minimization framework that guides the calibration methods to learning of true preference probability from the biased user-item interaction dataset. Extensive evaluations with various personalized ranking models on real-world datasets show that both the proposed calibration methods and the unbiased empirical risk minimization significantly improve the calibration performance.",True,True,"Kweon, Wonbin and Kang, SeongKu and Yu, Hwanjo",2022.0,,https://aaai.org/papers/04083-obtaining-calibrated-probabilities-with-personalized-ranking-models/,,,Obtaining Calibrated Probabilities with Personalized Ranking Models,Obtaining Calibrated Probabilities with Personalized Ranking Models,http://arxiv.org/pdf/2112.07428v2,"For personalized ranking models, the well-calibrated probability of an item being preferred by a user has great practical value. While existing work shows promising results in image classification, probability calibration has not been much explored for personalized ranking. In this paper, we aim to estimate the calibrated probability of how likely a user will prefer an item. We investigate various parametric distributions and propose two parametric calibration methods, namely Gaussian calibration and Gamma calibration. Each proposed method can be seen as a post-processing function that maps the ranking scores of pre-trained models to well-calibrated preference probabilities, without affecting the recommendation performance. 
We also design the unbiased empirical risk minimization framework that guides the calibration methods to learning of true preference probability from the biased user-item interaction dataset. Extensive evaluations with various personalized ranking models on real-world datasets show that both the proposed calibration methods and the unbiased empirical risk minimization significantly improve the calibration performance." "Unconstrained Monotonic Calibration of Predictions in Deep Ranking Systems",2504.14243v1,ConfCalib,\cite{ConfCalib},Confidence-Aware Multi-Field Model Calibration,http://arxiv.org/abs/2402.17655v2,"Accurately predicting the probabilities of user feedback, such as clicks and conversions, is critical for advertisement ranking and bidding. However, there often exist unwanted mismatches between predicted probabilities and true likelihoods due to the rapid shift of data distributions and intrinsic model biases. Calibration aims to address this issue by post-processing model predictions, and field-aware calibration can adjust model output on different feature field values to satisfy fine-grained advertising demands. Unfortunately, the observed samples corresponding to certain field values can be seriously limited to make confident calibrations, which may yield bias amplification and online disturbance. In this paper, we propose a confidence-aware multi-field calibration method, which adaptively adjusts the calibration intensity based on confidence levels derived from sample statistics. It also utilizes multiple fields for joint model calibration according to their importance to mitigate the impact of data sparsity on a single field. Extensive offline and online experiments show the superiority of our method in boosting advertising performance and reducing prediction deviations.",True,True,"Zhao, Yuang and Wu, Chuhan and Jia, Qinglin and Zhu, Hong and Yan, Jia and Zong, Libin and Zhang, Linxuan and Dong, Zhenhua and Zhang, Muyu",2024.0,,https://doi.org/10.1145/3627673.3680043,10.1145/3627673.3680043,,Confidence-Aware Multi-Field Model Calibration,[PDF] Confidence-Aware Multi-Field Model Calibration - arXiv,https://arxiv.org/pdf/2402.17655,"In this paper, we propose a confidence-aware multi-field calibration method, which adaptively adjusts the calibration intensity based on confidence levels" "Unconstrained Monotonic Calibration of Predictions in Deep Ranking Systems",2504.14243v1,LiRank,\cite{LiRank},LiRank: Industrial Large Scale Ranking Models at LinkedIn,http://arxiv.org/abs/2402.06859v2,"We present LiRank, a large-scale ranking framework at LinkedIn that brings to production state-of-the-art modeling architectures and optimization methods. We unveil several modeling improvements, including Residual DCN, which adds attention and residual connections to the famous DCNv2 architecture. We share insights into combining and tuning SOTA architectures to create a unified model, including Dense Gating, Transformers and Residual DCN. We also propose novel techniques for calibration and describe how we productionalized deep learning based explore/exploit methods. To enable effective, production-grade serving of large ranking models, we detail how to train and compress models using quantization and vocabulary compression. We provide details about the deployment setup for large-scale use cases of Feed ranking, Jobs Recommendations, and Ads click-through rate (CTR) prediction. We summarize our learnings from various A/B tests by elucidating the most effective technical approaches. 
These ideas have contributed to relative metrics improvements across the board at LinkedIn: +0.5% member sessions in the Feed, +1.76% qualified job applications for Jobs search and recommendations, and +4.3% for Ads CTR. We hope this work can provide practical insights and solutions for practitioners interested in leveraging large-scale deep ranking systems.",True,True,"Borisyuk, Fedor and Zhou, Mingzhou and Song, Qingquan and Zhu, Siyu and Tiwana, Birjodh and Parameswaran, Ganesh and Dangi, Siddharth and Hertel, Lars and Xiao, Qiang Charles and Hou, Xiaochen and Ouyang, Yunbo and Gupta, Aman and Singh, Sheallika and Liu, Dan and Cheng, Hailing and Le, Lei and Hung, Jonathan and Keerthi, Sathiya and Wang, Ruoyan and Zhang, Fengyu and Kothari, Mohit and Zhu, Chen and Sun, Daqi and Dai, Yun and Luan, Xun and Zhu, Sirou and Wang, Zhiwei and Daftary, Neil and Shen, Qianqi and Jiang, Chengming and Wei, Haichao and Varshney, Maneesh and Ghoting, Amol and Ghosh, Souvik",2024.0,,https://doi.org/10.1145/3637528.3671561,10.1145/3637528.3671561,,LiRank: Industrial Large Scale Ranking Models at LinkedIn,LiRank: Industrial Large Scale Ranking Models at LinkedIn,http://arxiv.org/pdf/2402.06859v2,"We present LiRank, a large-scale ranking framework at LinkedIn that brings to production state-of-the-art modeling architectures and optimization methods. We unveil several modeling improvements, including Residual DCN, which adds attention and residual connections to the famous DCNv2 architecture. We share insights into combining and tuning SOTA architectures to create a unified model, including Dense Gating, Transformers and Residual DCN. We also propose novel techniques for calibration and describe how we productionalized deep learning based explore/exploit methods. To enable effective, production-grade serving of large ranking models, we detail how to train and compress models using quantization and vocabulary compression. We provide details about the deployment setup for large-scale use cases of Feed ranking, Jobs Recommendations, and Ads click-through rate (CTR) prediction. We summarize our learnings from various A/B tests by elucidating the most effective technical approaches. These ideas have contributed to relative metrics improvements across the board at LinkedIn: +0.5% member sessions in the Feed, +1.76% qualified job applications for Jobs search and recommendations, and +4.3% for Ads CTR. We hope this work can provide practical insights and solutions for practitioners interested in leveraging large-scale deep ranking systems." "Unconstrained Monotonic Calibration of Predictions in Deep Ranking Systems",2504.14243v1,NeuralCalib,\cite{NeuralCalib},"Field-aware Calibration: A Simple and Empirically Strong Method for Reliable Probabilistic Predictions",http://arxiv.org/abs/1905.10713v3,"It is often observed that the probabilistic predictions given by a machine learning model can disagree with averaged actual outcomes on specific subsets of data, which is also known as the issue of miscalibration. It is responsible for the unreliability of practical machine learning systems. For example, in online advertising, an ad can receive a click-through rate prediction of 0.1 over some population of users where its actual click rate is 0.15. In such cases, the probabilistic predictions have to be fixed before the system can be deployed. 
In this paper, we first introduce a new evaluation metric named field-level calibration error that measures the bias in predictions over the sensitive input field that the decision-maker concerns. We show that existing post-hoc calibration methods have limited improvements in the new field-level metric and other non-calibration metrics such as the AUC score. To this end, we propose Neural Calibration, a simple yet powerful post-hoc calibration method that learns to calibrate by making full use of the field-aware information over the validation set. We present extensive experiments on five large-scale datasets. The results showed that Neural Calibration significantly improves against uncalibrated predictions in common metrics such as the negative log-likelihood, Brier score and AUC, as well as the proposed field-level calibration error.",True,True,"Pan, Feiyang and Ao, Xiang and Tang, Pingzhong and Lu, Min and Liu, Dapeng and Xiao, Lei and He, Qing",2020.0,,https://doi.org/10.1145/3366423.3380154,10.1145/3366423.3380154,,"Field-aware Calibration: A Simple and Empirically Strong Method for Reliable Probabilistic Predictions",Field-aware Calibration-A Simple and Empirically Strong Method for ...,https://zhuanlan.zhihu.com/p/527521112,... Reliable Probabilistic Prediction ... Field-aware Calibration- A Simple and Empirically Strong Method for Reliable Probabilistic Predictions. "Unconstrained Monotonic Calibration of Predictions in Deep Ranking Systems",2504.14243v1,AdaCalib,\cite{AdaCalib},"Posterior Probability Matters: Doubly-Adaptive Calibration for Neural Predictions in Online Advertising",http://arxiv.org/abs/2205.07295v2,"Predicting user response probabilities is vital for ad ranking and bidding. We hope that predictive models can produce accurate probabilistic predictions that reflect true likelihoods. Calibration techniques aim to post-process model predictions to posterior probabilities. Field-level calibration -- which performs calibration w.r.t. to a specific field value -- is fine-grained and more practical. In this paper we propose a doubly-adaptive approach AdaCalib. It learns an isotonic function family to calibrate model predictions with the guidance of posterior statistics, and field-adaptive mechanisms are designed to ensure that the posterior is appropriate for the field value to be calibrated. Experiments verify that AdaCalib achieves significant improvement on calibration performance. It has been deployed online and beats previous approach.",True,True,"Wei, Penghui and Zhang, Weimin and Hou, Ruijie and Liu, Jinquan and Liu, Shaoguo and Wang, Liang and Zheng, Bo",2022.0,,https://doi.org/10.1145/3477495.3531911,10.1145/3477495.3531911,,"Posterior Probability Matters: Doubly-Adaptive Calibration for Neural Predictions in Online Advertising",Posterior Probability Matters: Doubly-Adaptive Calibration ...,https://www.researchgate.net/publication/360640754_Posterior_Probability_Matters_Doubly-Adaptive_Calibration_for_Neural_Predictions_in_Online_Advertising,In this paper we propose a doubly-adaptive approach AdaCalib. It learns an isotonic function family to calibrate model predictions with the "Unconstrained Monotonic Calibration of Predictions in Deep Ranking Systems",2504.14243v1,SBCR,\cite{SBCR},A Self-boosted Framework for Calibrated Ranking,http://arxiv.org/abs/2406.08010v1,"Scale-calibrated ranking systems are ubiquitous in real-world applications nowadays, which pursue accurate ranking quality and calibrated probabilistic predictions simultaneously. 
For instance, in the advertising ranking system, the predicted click-through rate (CTR) is utilized for ranking and required to be calibrated for the downstream cost-per-click ads bidding. Recently, multi-objective based methods have been wildly adopted as a standard approach for Calibrated Ranking, which incorporates the combination of two loss functions: a pointwise loss that focuses on calibrated absolute values and a ranking loss that emphasizes relative orderings. However, when applied to industrial online applications, existing multi-objective CR approaches still suffer from two crucial limitations. First, previous methods need to aggregate the full candidate list within a single mini-batch to compute the ranking loss. Such aggregation strategy violates extensive data shuffling which has long been proven beneficial for preventing overfitting, and thus degrades the training effectiveness. Second, existing multi-objective methods apply the two inherently conflicting loss functions on a single probabilistic prediction, which results in a sub-optimal trade-off between calibration and ranking. To tackle the two limitations, we propose a Self-Boosted framework for Calibrated Ranking (SBCR).",True,True,"Zhang, Shunyu and Liu, Hu and Bao, Wentian and Yu, Enyun and Song, Yang",2024.0,,https://doi.org/10.1145/3637528.3671570,10.1145/3637528.3671570,,A Self-boosted Framework for Calibrated Ranking,A Self-boosted Framework for Calibrated Ranking,https://arxiv.org/html/2406.08010v1,"We propose a Self-Boosted framework for Calibrated Ranking (SBCR). In SBCR, the predicted ranking scores by the online deployed model are dumped into context" "Unconstrained Monotonic Calibration of Predictions in Deep Ranking Systems",2504.14243v1,error,\cite{error},"On the error of linear interpolation and the orientation, aspect ratio, and internal angles of a triangle",,,True,False,"Cao, Weiming",2005.0,,https://dl.acm.org/doi/abs/10.1137/S0036142903433492,,SIAM journal on numerical analysis,"On the error of linear interpolation and the orientation, aspect ratio, and internal angles of a triangle",Quirk in VertexColors interpolation when displaying Polygon,https://mathematica.stackexchange.com/questions/16168/quirk-in-vertexcolors-interpolation-when-displaying-polygon,"The best general way to deal with this is to (1) triangulate your large polygon (2) assign vertex colors to the newly introduced vertices (could be tricky, in" "Unconstrained Monotonic Calibration of Predictions in Deep Ranking Systems",2504.14243v1,DESC,\cite{DESC},"Deep Ensemble Shape Calibration: Multi-Field Post-hoc Calibration in Online Advertising",http://arxiv.org/abs/2401.09507v2,"In the e-commerce advertising scenario, estimating the true probabilities (known as a calibrated estimate) on Click-Through Rate (CTR) and Conversion Rate (CVR) is critical. Previous research has introduced numerous solutions for addressing the calibration problem. These methods typically involve the training of calibrators using a validation set and subsequently applying these calibrators to correct the original estimated values during online inference. However, what sets e-commerce advertising scenarios apart is the challenge of multi-field calibration. Multi-field calibration requires achieving calibration in each field. In order to achieve multi-field calibration, it is necessary to have a strong data utilization ability. 
Because the quantity of pCTR specified range for a single field-value (such as user ID and item ID) sample is relatively small, this makes the calibrator more difficult to train. However, existing methods have difficulty effectively addressing these issues. To solve these problems, we propose a new method named Deep Ensemble Shape Calibration (DESC). In terms of business understanding and interpretability, we decompose multi-field calibration into value calibration and shape calibration. We introduce innovative basis calibration functions, which enhance both function expression capabilities and data utilization by combining these basis calibration functions. A significant advancement lies in the development of an allocator capable of allocating the most suitable calibrators to different estimation error distributions within diverse fields and values. We achieve significant improvements in both public and industrial datasets. In online experiments, we observe a +2.5% increase in CVR and +4.0% in GMV (Gross Merchandise Volume). Our code is now available at: https://github.com/HaoYang0123/DESC.",True,True,"Yang, Shuai and Yang, Hao and Zou, Zhuang and Xu, Linhe and Yuan, Shuo and Zeng, Yifan",2024.0,,https://doi.org/10.1145/3637528.3671529,10.1145/3637528.3671529,,"Deep Ensemble Shape Calibration: Multi-Field Post-hoc Calibration in Online Advertising",Multi-Field Post-hoc Calibration in Online Advertising - arXiv,https://arxiv.org/abs/2401.09507,"arXiv:2401.09507: Deep Ensemble Shape Calibration: Multi-Field Post-hoc Calibration in Online Advertising, by Shuai Yang and 5 other authors." "Unconstrained Monotonic Calibration of Predictions in Deep Ranking Systems",2504.14243v1,ScaleCalib,\cite{ScaleCalib},Scale Calibration of Deep Ranking Models,,,True,False,"Yan, Le and Qin, Zhen and Wang, Xuanhui and Bendersky, Michael and Najork, Marc",2022.0,,https://doi.org/10.1145/3534678.3539072,10.1145/3534678.3539072,,Scale Calibration of Deep Ranking Models,Scale Calibration of Deep Ranking Models - Google Research,https://research.google/pubs/scale-calibration-of-deep-ranking-models/,"Learning-to-Rank (LTR) systems are ubiquitous in web applications nowadays. However, virtually all advanced ranking functions are not scale calibrated. This is a major reason that existing ads ranking methods use scale calibrated pointwise loss functions that may sacrifice ranking performance. Our results show that our proposed calibrated ranking losses can achieve nearly optimal results in terms of both ranking quality and score scale calibration.
" "Unconstrained Monotonic Calibration of Predictions in Deep Ranking Systems",2504.14243v1,JRC,\cite{JRC},"Joint Optimization of Ranking and Calibration with Contextualized Hybrid Model",http://arxiv.org/abs/2208.06164v2,"Despite the development of ranking optimization techniques, pointwise loss remains the dominating approach for click-through rate prediction. It can be attributed to the calibration ability of the pointwise loss since the prediction can be viewed as the click probability. In practice, a CTR prediction model is also commonly assessed with the ranking ability. To optimize the ranking ability, ranking loss (e.g., pairwise or listwise loss) can be adopted as they usually achieve better rankings than pointwise loss. Previous studies have experimented with a direct combination of the two losses to obtain the benefit from both losses and observed an improved performance. However, previous studies break the meaning of output logit as the click-through rate, which may lead to sub-optimal solutions. To address this issue, we propose an approach that can Jointly optimize the Ranking and Calibration abilities (JRC for short). JRC improves the ranking ability by contrasting the logit value for the sample with different labels and constrains the predicted probability to be a function of the logit subtraction. We further show that JRC consolidates the interpretation of logits, where the logits model the joint distribution. With such an interpretation, we prove that JRC approximately optimizes the contextualized hybrid discriminative-generative objective. Experiments on public and industrial datasets and online A/B testing show that our approach improves both ranking and calibration abilities. Since May 2022, JRC has been deployed on the display advertising platform of Alibaba and has obtained significant performance improvements.",True,True,"Sheng, Xiang-Rong and Gao, Jingyue and Cheng, Yueyao and Yang, Siran and Han, Shuguang and Deng, Hongbo and Jiang, Yuning and Xu, Jian and Zheng, Bo",2023.0,,https://doi.org/10.1145/3580305.3599851,10.1145/3580305.3599851,,"Joint Optimization of Ranking and Calibration with Contextualized Hybrid Model",[PDF] Joint Optimization of Ranking and Calibration with Contextualized ...,https://arxiv.org/pdf/2208.06164,The proposed JRC method extends the idea of hybrid modeling with contextualization for CTR prediction. Incorporating context information further enables our. "Unconstrained Monotonic Calibration of Predictions in Deep Ranking Systems",2504.14243v1,RCR,\cite{RCR},"Regression Compatible Listwise Objectives for Calibrated Ranking with Binary Relevance",http://arxiv.org/abs/2211.01494v2,"As Learning-to-Rank (LTR) approaches primarily seek to improve ranking quality, their output scores are not scale-calibrated by design. This fundamentally limits LTR usage in score-sensitive applications. Though a simple multi-objective approach that combines a regression and a ranking objective can effectively learn scale-calibrated scores, we argue that the two objectives are not necessarily compatible, which makes the trade-off less ideal for either of them. In this paper, we propose a practical regression compatible ranking (RCR) approach that achieves a better trade-off, where the two ranking and regression components are proved to be mutually aligned. Although the same idea applies to ranking with both binary and graded relevance, we mainly focus on binary labels in this paper.
We evaluate the proposed approach on several public LTR benchmarks and show that it consistently achieves either best or competitive result in terms of both regression and ranking metrics, and significantly improves the Pareto frontiers in the context of multi-objective optimization. Furthermore, we evaluated the proposed approach on YouTube Search and found that it not only improved the ranking quality of the production pCTR model, but also brought gains to the click prediction accuracy. The proposed approach has been successfully deployed in the YouTube production system.",True,True,"Bai, Aijun and Jagerman, Rolf and Qin, Zhen and Yan, Le and Kar, Pratyush and Lin, Bing-Rong and Wang, Xuanhui and Bendersky, Michael and Najork, Marc",2023.0,,https://doi.org/10.1145/3583780.3614712,10.1145/3583780.3614712,,"Regression Compatible Listwise Objectives for Calibrated Ranking with Binary Relevance",[PDF] Regression Compatible Listwise Objectives for Calibrated Ranking ...,https://arxiv.org/pdf/2211.01494,"In this paper, we propose a practical regression compatible ranking (RCR) approach where the two ranking and regression components are proved to be mutually" "Unconstrained Monotonic Calibration of Predictions in Deep Ranking Systems",2504.14243v1,CLID,\cite{CLID},"Calibration-compatible Listwise Distillation of Privileged Features for CTR Prediction",http://arxiv.org/abs/2312.08727v1,"In machine learning systems, privileged features refer to the features that are available during offline training but inaccessible for online serving. Previous studies have recognized the importance of privileged features and explored ways to tackle online-offline discrepancies. A typical practice is privileged features distillation (PFD): train a teacher model using all features (including privileged ones) and then distill the knowledge from the teacher model using a student model (excluding the privileged features), which is then employed for online serving. In practice, the pointwise cross-entropy loss is often adopted for PFD. However, this loss is insufficient to distill the ranking ability for CTR prediction. First, it does not consider the non-i.i.d. characteristic of the data distribution, i.e., other items on the same page significantly impact the click probability of the candidate item. Second, it fails to consider the relative item order ranked by the teacher model's predictions, which is essential to distill the ranking ability. To address these issues, we first extend the pointwise-based PFD to the listwise-based PFD. We then define the calibration-compatible property of distillation loss and show that commonly used listwise losses do not satisfy this property when employed as distillation loss, thus compromising the model's calibration ability, which is another important measure for CTR prediction. To tackle this dilemma, we propose Calibration-compatible LIstwise Distillation (CLID), which employs carefully-designed listwise distillation loss to achieve better ranking ability than the pointwise-based PFD while preserving the model's calibration ability. We theoretically prove it is calibration-compatible. 
Extensive experiments on public datasets and a production dataset collected from the display advertising system of Alibaba further demonstrate the effectiveness of CLID.",True,True,"Gui, Xiaoqiang and Cheng, Yueyao and Sheng, Xiang-Rong and Zhao, Yunfeng and Yu, Guoxian and Han, Shuguang and Jiang, Yuning and Xu, Jian and Zheng, Bo",2024.0,,https://doi.org/10.1145/3616855.3635810,10.1145/3616855.3635810,,"Calibration-compatible Listwise Distillation of Privileged Features for CTR Prediction",[PDF] Calibration-compatible Listwise Distillation of Privileged Features for ...,https://arxiv.org/pdf/2312.08727,"In the ranking stage, a CTR prediction model typically takes the user's features and candidate items' features as input. The model then predicts." "Unconstrained Monotonic Calibration of Predictions in Deep Ranking Systems",2504.14243v1,BBP,\cite{BBP},Beyond Binary Preference: Leveraging Bayesian Approaches for Joint Optimization of Ranking and Calibration,,,True,False,"Liu, Chang and Wang, Qiwei and Lin, Wenqing and Ding, Yue and Lu, Hongtao",2024.0,,https://doi.org/10.1145/3637528.3671577,10.1145/3637528.3671577,,Beyond Binary Preference: Leveraging Bayesian Approaches for Joint Optimization of Ranking and Calibration,Leveraging Bayesian Approaches for Joint Optimization of Ranking ...,https://www.researchgate.net/publication/383420396_Beyond_Binary_Preference_Leveraging_Bayesian_Approaches_for_Joint_Optimization_of_Ranking_and_Calibration,"BBP [28] tackles the issue of insufficient samples for ranking loss by estimating beta distributions for users and items, generating continuously comparable" "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,PORGraph,\cite{PORGraph},"Hierarchical Fashion Graph Network for Personalized Outfit Recommendation",http://arxiv.org/abs/2005.12566v1,"Fashion outfit recommendation has attracted increasing attentions from online shopping services and fashion communities.Distinct from other scenarios (e.g., social networking or content sharing) which recommend a single item (e.g., a friend or picture) to a user, outfit recommendation predicts user preference on a set of well-matched fashion items.Hence, performing high-quality personalized outfit recommendation should satisfy two requirements -- 1) the nice compatibility of fashion items and 2) the consistence with user preference. However, present works focus mainly on one of the requirements and only consider either user-outfit or outfit-item relationships, thereby easily leading to suboptimal representations and limiting the performance. In this work, we unify two tasks, fashion compatibility modeling and personalized outfit recommendation. Towards this end, we develop a new framework, Hierarchical Fashion Graph Network(HFGN), to model relationships among users, items, and outfits simultaneously. In particular, we construct a hierarchical structure upon user-outfit interactions and outfit-item mappings. We then get inspirations from recent graph neural networks, and employ the embedding propagation on such hierarchical graph, so as to aggregate item information into an outfit representation, and then refine a user's representation via his/her historical outfits. Furthermore, we jointly train these two tasks to optimize these representations. 
To demonstrate the effectiveness of HFGN, we conduct extensive experiments on a benchmark dataset, and HFGN achieves significant improvements over the state-of-the-art compatibility matching models like NGNN and outfit recommenders like FHN.",True,True,"Xingchen Li and Xiang Wang and Xiangnan He and Long Chen and Jun Xiao and Tat{-}Seng Chua",2020.0,,,,,"Hierarchical Fashion Graph Network for Personalized Outfit Recommendation",xcppy/hierarchical_fashion_graph_network - GitHub,https://github.com/xcppy/hierarchical_fashion_graph_network,Hierarchical Fashion Graph Network (HFGN) is a new recommendation framework for personalized outfit recommendation task based on hierarchical graph structure. "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,PORAnchors,\cite{PORAnchors},Personalized Outfit Recommendation With Learnable Anchors,,,True,False,"Zhi Lu and Yang Hu and Yan Chen and Bing Zeng",2021.0,,,,,Personalized Outfit Recommendation With Learnable Anchors,[PDF] Personalized Outfit Recommendation With Learnable Anchors,https://openaccess.thecvf.com/content/CVPR2021/papers/Lu_Personalized_Outfit_Recommendation_With_Learnable_Anchors_CVPR_2021_paper.pdf,"The fashion recommendation task, which is based on fashion compatibility learning, is to pre- dict whether a set of fashion items are well matched. In." "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,A-FKG,\cite{A-FKG},"{\textdollar}A{\^{}}3{\textdollar}-FKG: Attentive Attribute-Aware Fashion Knowledge Graph for Outfit Preference Prediction",,,True,False,"Huijing Zhan and Jie Lin and Kenan Emir Ak and Boxin Shi and Ling{-}Yu Duan and Alex C. Kot",2022.0,,,,{IEEE} Trans. Multim.,"{\textdollar}A{\^{}}3{\textdollar}-FKG: Attentive Attribute-Aware Fashion Knowledge Graph for Outfit Preference Prediction",[PDF] -FKG: Attentive Attribute-Aware Fashion Knowledge Graph for Outfit ...,http://www.jdl.link/doc/2011/20211231_Zhan_TMM21.pdf,"In this paper, we address the task of personalized outfit preference prediction via a novel Attentive Attribute-Aware Fashion Knowledge Graph. (A3-FKG), which" "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,FashionRecSurvey-23,\cite{FashionRecSurvey-23},Computational Technologies for Fashion Recommendation: A Survey,http://arxiv.org/abs/2306.03395v2,"Fashion recommendation is a key research field in computational fashion research and has attracted considerable interest in the computer vision, multimedia, and information retrieval communities in recent years. Due to the great demand for applications, various fashion recommendation tasks, such as personalized fashion product recommendation, complementary (mix-and-match) recommendation, and outfit recommendation, have been posed and explored in the literature. The continuing research attention and advances impel us to look back and in-depth into the field for a better understanding. In this paper, we comprehensively review recent research efforts on fashion recommendation from a technological perspective. We first introduce fashion recommendation at a macro level and analyse its characteristics and differences with general recommendation tasks. We then clearly categorize different fashion recommendation efforts into several sub-tasks and focus on each sub-task in terms of its problem formulation, research focus, state-of-the-art methods, and limitations. 
We also summarize the datasets proposed in the literature for use in fashion recommendation studies to give readers a brief illustration. Finally, we discuss several promising directions for future research in this field. Overall, this survey systematically reviews the development of fashion recommendation research. It also discusses the current limitations and gaps between academic research and the real needs of the fashion industry. In the process, we offer a deep insight into how the fashion industry could benefit from the computational technologies of fashion recommendation.",True,True,"Yujuan Ding and Zhihui Lai and P. Y. Mok and Tat{-}Seng Chua",2024.0,,,,{ACM} Comput. Surv.,Computational Technologies for Fashion Recommendation: A Survey,Computational Technologies for Fashion Recommendation: A Survey,http://arxiv.org/pdf/2306.03395v2,"Fashion recommendation is a key research field in computational fashion research and has attracted considerable interest in the computer vision, multimedia, and information retrieval communities in recent years. Due to the great demand for applications, various fashion recommendation tasks, such as personalized fashion product recommendation, complementary (mix-and-match) recommendation, and outfit recommendation, have been posed and explored in the literature. The continuing research attention and advances impel us to look back and in-depth into the field for a better understanding. In this paper, we comprehensively review recent research efforts on fashion recommendation from a technological perspective. We first introduce fashion recommendation at a macro level and analyse its characteristics and differences with general recommendation tasks. We then clearly categorize different fashion recommendation efforts into several sub-tasks and focus on each sub-task in terms of its problem formulation, research focus, state-of-the-art methods, and limitations. We also summarize the datasets proposed in the literature for use in fashion recommendation studies to give readers a brief illustration. Finally, we discuss several promising directions for future research in this field. Overall, this survey systematically reviews the development of fashion recommendation research. It also discusses the current limitations and gaps between academic research and the real needs of the fashion industry. In the process, we offer a deep insight into how the fashion industry could benefit from the computational technologies of fashion recommendation." "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,personalCom,\cite{personalCom},Personalized Capsule Wardrobe Creation with Garment and User Modeling,,,True,False,"Xue Dong and Xuemeng Song and Fuli Feng and Peiguang Jing and Xin{-}Shun Xu and Liqiang Nie",2019.0,,,,,Personalized Capsule Wardrobe Creation with Garment and User Modeling,Personalized Capsule Wardrobe Creation with Garment ...,https://www.researchgate.net/publication/336708181_Personalized_Capsule_Wardrobe_Creation_with_Garment_and_User_Modeling,"[14] introduce a combinatorial optimization based personalized capsule wardrobe creation framework, which jointly integrates user modeling and garment modeling." "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,PFOG,\cite{PFOG},Personalized fashion outfit generation with user coordination preference learning,,,True,False,"Yujuan Ding and P. Y. Mok and Yunshan Ma and Yi Bin",2023.0,,,,Inf. Process. Manag.,Personalized fashion outfit generation with user coordination preference learning,Personalized fashion outfit generation with user coordination ...,https://www.sciencedirect.com/science/article/pii/S0306457323001711,"Personalized fashion outfit generation with user coordination preference learning - ScienceDirect Personalized fashion outfit generation with user coordination preference learning Fashion outfit recommendation, aiming to model personal preference of users on outfits, is one of the most widely studied outfit-related tasks. In contrast, the task of fashion outfit generation (Bettaney et al., 2021, Li et al., 2019, Lorbert et al., 2021, Madan et al., 2021) specifically focuses on the generation process of fashion outfits based on individual items, while neglects user preferences, making the generated outfits less attractive to users. This paper addressed the personalized outfit generation problem by introducing user coordination preference, which refers to the template preference that users have when combining different categories of fashion items." "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,POG,\cite{POG},"POG: Personalized Outfit Generation for Fashion Recommendation at Alibaba iFashion",http://arxiv.org/abs/1905.01866v3,"Increasing demand for fashion recommendation raises a lot of challenges for online shopping platforms and fashion communities. In particular, there exist two requirements for fashion outfit recommendation: the Compatibility of the generated fashion outfits, and the Personalization in the recommendation process. In this paper, we demonstrate these two requirements can be satisfied via building a bridge between outfit generation and recommendation. Through large data analysis, we observe that people have similar tastes in individual items and outfits. Therefore, we propose a Personalized Outfit Generation (POG) model, which connects user preferences regarding individual items and outfits with Transformer architecture. Extensive offline and online experiments provide strong quantitative evidence that our method outperforms alternative methods regarding both compatibility and personalization metrics. Furthermore, we deploy POG on a platform named Dida in Alibaba to generate personalized outfits for the users of the online application iFashion. This work represents a first step towards an industrial-scale fashion outfit generation and recommendation solution, which goes beyond generating outfits based on explicit queries, or merely recommending from existing outfit pools. As part of this work, we release a large-scale dataset consisting of 1.01 million outfits with rich context information, and 0.28 billion user click actions from 3.57 million users. To the best of our knowledge, this dataset is the largest, publicly available, fashion related dataset, and the first to provide user behaviors relating to both outfits and fashion items.",True,True,"Wen Chen and Pipei Huang and Jiaming Xu and Xin Guo and Cheng Guo and Fei Sun and Chao Li and Andreas Pfadler and Huan Zhao and Binqiang Zhao",2019.0,,,,,"POG: Personalized Outfit Generation for Fashion Recommendation at Alibaba iFashion",iFashion Alibaba Dataset - Papers With Code,https://paperswithcode.com/dataset/ifashion-alibaba-pog,"in POG: Personalized Outfit Generation for Fashion Recommendation at Alibaba iFashion. 1. 1.01 million outfits, 583K fashion items, with context information."
"FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,MultiCBR,\cite{MultiCBR},MultiCBR: Multi-view Contrastive Learning for Bundle Recommendation,http://arxiv.org/abs/2311.16751v3,"Bundle recommendation seeks to recommend a bundle of related items to users to improve both user experience and the profits of platform. Existing bundle recommendation models have progressed from capturing only user-bundle interactions to the modeling of multiple relations among users, bundles and items. CrossCBR, in particular, incorporates cross-view contrastive learning into a two-view preference learning framework, significantly improving SOTA performance. It does, however, have two limitations: 1) the two-view formulation does not fully exploit all the heterogeneous relations among users, bundles and items; and 2) the ""early contrast and late fusion"" framework is less effective in capturing user preference and difficult to generalize to multiple views. In this paper, we present MultiCBR, a novel Multi-view Contrastive learning framework for Bundle Recommendation. First, we devise a multi-view representation learning framework capable of capturing all the user-bundle, user-item and bundle-item relations, especially better utilizing the bundle-item affiliations to enhance sparse bundles' representations. Second, we innovatively adopt an ""early fusion and late contrast"" design that first fuses the multi-view representations before performing self-supervised contrastive learning. In comparison to existing approaches, our framework reverses the order of fusion and contrast, introducing the following advantages: 1)our framework is capable of modeling both cross-view and ego-view preferences, allowing us to achieve enhanced user preference modeling; and 2) instead of requiring quadratic number of cross-view contrastive losses, we only require two self-supervised contrastive losses, resulting in minimal extra costs. Experimental results on three public datasets indicate that our method outperforms SOTA methods.",True,True,"Yunshan Ma and Yingzhi He and Xiang Wang and Yinwei Wei and Xiaoyu Du and Yuyangzi Fu and Tat{-}Seng Chua",2024.0,,,,{ACM} Trans. Inf. Syst.,MultiCBR: Multi-view Contrastive Learning for Bundle Recommendation,Multi-view Contrastive Learning for Bundle Recommendation,https://dl.acm.org/doi/10.1145/3640810,"In this article, we present MultiCBR, a novel Multi-view Contrastive learning framework for Bundle Recommendation. First, we devise a multi-view representation" "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,EBRec,\cite{EBRec},Enhancing Item-level Bundle Representation for Bundle Recommendation,http://arxiv.org/abs/2311.16892v1,"Bundle recommendation approaches offer users a set of related items on a particular topic. The current state-of-the-art (SOTA) method utilizes contrastive learning to learn representations at both the bundle and item levels. However, due to the inherent difference between the bundle-level and item-level preferences, the item-level representations may not receive sufficient information from the bundle affiliations to make accurate predictions. In this paper, we propose a novel approach EBRec, short of Enhanced Bundle Recommendation, which incorporates two enhanced modules to explore inherent item-level bundle representations. 
First, we propose to incorporate the bundle-user-item (B-U-I) high-order correlations to explore more collaborative information, thus to enhance the previous bundle representation that solely relies on the bundle-item affiliation information. Second, we further enhance the B-U-I correlations by augmenting the observed user-item interactions with interactions generated from pre-trained models, thus improving the item-level bundle representations. We conduct extensive experiments on three public datasets, and the results justify the effectiveness of our approach as well as the two core modules. Codes and datasets are available at https://github.com/answermycode/EBRec.",True,True,"Du, Xiaoyu and Qian, Kun and Ma, Yunshan and Xiang, Xinguang",2023.0,,,,ACM Transactions on Recommender Systems,Enhancing Item-level Bundle Representation for Bundle Recommendation,Enhancing Item-level Bundle Representation ... - ACM Digital Library,https://dl.acm.org/doi/10.1145/3637067,"In this article, we propose a novel approach, Enhanced Bundle Recommendation (EBRec), which incorporates two enhanced modules to explore inherent item-level" "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,BundleMLLM,\cite{BundleMLLM},Fine-tuning Multimodal Large Language Models for Product Bundling,http://arxiv.org/abs/2407.11712v4,"Recent advances in product bundling have leveraged multimodal information through sophisticated encoders, but remain constrained by limited semantic understanding and a narrow scope of knowledge. Therefore, some attempts employ In-context Learning (ICL) to explore the potential of large language models (LLMs) for their extensive knowledge and complex reasoning abilities. However, these efforts are inadequate in understanding mulitmodal data and exploiting LLMs' knowledge for product bundling. To bridge the gap, we introduce Bundle-MLLM, a novel framework that fine-tunes LLMs through a hybrid item tokenization approach within a well-designed optimization strategy. Specifically, we integrate textual, media, and relational data into a unified tokenization, introducing a soft separation token to distinguish between textual and non-textual tokens. Additionally, a streamlined yet powerful multimodal fusion module is employed to embed all non-textual features into a single, informative token, significantly boosting efficiency. To tailor product bundling tasks for LLMs, we reformulate the task as a multiple-choice question with candidate items as options. We further propose a progressive optimization strategy that fine-tunes LLMs for disentangled objectives: 1) learning bundle patterns and 2) enhancing multimodal semantic understanding specific to product bundling. Extensive experiments on four datasets across two domains demonstrate that our approach outperforms a range of state-of-the-art (SOTA) methods.",True,True,"Xiaohao Liu and Jie Wu and Zhulin Tao and Yunshan Ma and Yinwei Wei and Tat{-}Seng Chua",2025.0,,,,,Fine-tuning Multimodal Large Language Models for Product Bundling,Fine-tuning Multimodal Large Language Models for Product Bundling,https://arxiv.org/abs/2407.11712,"View a PDF of the paper titled Fine-tuning Multimodal Large Language Models for Product Bundling, by Xiaohao Liu and 5 other authors We further propose a progressive optimization strategy that fine-tunes LLMs for disentangled objectives: 1) learning bundle patterns and 2) enhancing multimodal semantic understanding specific to product bundling. 
" "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,SD,\cite{SD},High-Resolution Image Synthesis with Latent Diffusion Models,,,True,False,"Robin Rombach and Andreas Blattmann and Dominik Lorenz and Patrick Esser and Bj{\""{o}}rn Ommer",2022.0,,,,,High-Resolution Image Synthesis with Latent Diffusion Models,[PDF] High-Resolution Image Synthesis With Latent Diffusion Models,https://openaccess.thecvf.com/content/CVPR2022/papers/Rombach_High-Resolution_Image_Synthesis_With_Latent_Diffusion_Models_CVPR_2022_paper.pdf,"High-Resolution Image Synthesis with Latent Diffusion Models Robin Rombach1 ∗ Andreas Blattmann1 ∗ Dominik Lorenz1 Patrick Esser Bj¨ orn Ommer1 1Ludwig Maximilian University of Munich & IWR, Heidelberg University, Germany Runway ML https://github.com/CompVis/latent-diffusion Abstract By decomposing the image formation process into a se-quential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Our latent diffusion models (LDMs) achieve new state of the art scores for im-age inpainting and class-conditional image synthesis and highly competitive performance on various tasks, includ-ing unconditional image generation, text-to-image synthe-sis, and super-resolution, while significantly reducing com-putational requirements compared to pixel-based DMs. 1." "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,controlNet,\cite{controlNet},Adding Conditional Control to Text-to-Image Diffusion Models,http://arxiv.org/abs/2302.05543v3,"We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with ""zero convolutions"" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets.
Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.",True,True,"Lvmin Zhang and Anyi Rao and Maneesh Agrawala",2023.0,,,,,Adding Conditional Control to Text-to-Image Diffusion Models,[PDF] Adding Conditional Control to Text-to-Image Diffusion Models,https://openaccess.thecvf.com/content/ICCV2023/papers/Zhang_Adding_Conditional_Control_to_Text-to-Image_Diffusion_Models_ICCV_2023_paper.pdf,"Abstract We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. This paper presents ControlNet, an end-to-end neural network architecture that learns conditional controls for large pretrained text-to-image diffusion models (Stable Diffusion in our implementation). In summary, (1) we propose ControlNet, a neural network architecture that can add spatially localized input conditions to a pretrained text-to-image diffusion model via efficient finetuning, (2) we present pretrained ControlNets to control Stable Diffusion, conditioned on Canny edges, Hough lines, user scribbles, human key points, segmentation maps, shape normals, depths, and cartoon line drawings, and (3) we val-idate the method with ablative experiments comparing to several alternative architectures, and conduct user studies focused on several previous baselines across different tasks." "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,lora,\cite{lora},"QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models",,,True,False,"Yuhui Xu and Lingxi Xie and Xiaotao Gu and Xin Chen and Heng Chang and Hengheng Zhang and Zhengsu Chen and Xiaopeng Zhang and Qi Tian",2024.0,,,,,"QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models",[PDF] QA-LORA: QUANTIZATION-AWARE LOW-RANK ADAPTATION OF ...,https://openreview.net/pdf?id=WvFoJccpo8,"Hence,. QA-LoRA is an effective and off-the-shelf method for joint quantization and adaptation of LLMs. 2 RELATED WORK. Large language models (LLMs) (Devlin et" "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,DiFashion,\cite{DiFashion},Diffusion Models for Generative Outfit Recommendation,http://arxiv.org/abs/2402.17279v3,"Outfit Recommendation (OR) in the fashion domain has evolved through two stages: Pre-defined Outfit Recommendation and Personalized Outfit Composition. However, both stages are constrained by existing fashion products, limiting their effectiveness in addressing users' diverse fashion needs. Recently, the advent of AI-generated content provides the opportunity for OR to transcend these limitations, showcasing the potential for personalized outfit generation and recommendation. To this end, we introduce a novel task called Generative Outfit Recommendation (GOR), aiming to generate a set of fashion images and compose them into a visually compatible outfit tailored to specific users. The key objectives of GOR lie in the high fidelity, compatibility, and personalization of generated outfits. To achieve these, we propose a generative outfit recommender model named DiFashion, which empowers exceptional diffusion models to accomplish the parallel generation of multiple fashion images. To ensure three objectives, we design three kinds of conditions to guide the parallel generation process and adopt Classifier-Free-Guidance to enhance the alignment between the generated images and conditions. 
We apply DiFashion on both personalized Fill-In-The-Blank and GOR tasks and conduct extensive experiments on iFashion and Polyvore-U datasets. The quantitative and human-involved qualitative evaluation demonstrate the superiority of DiFashion over competitive baselines.",True,True,"Yiyan Xu and Wenjie Wang and Fuli Feng and Yunshan Ma and Jizhi Zhang and Xiangnan He",2024.0,,,,,Diffusion Models for Generative Outfit Recommendation,Diffusion Models for Generative Outfit Recommendation,http://arxiv.org/pdf/2402.17279v3,"Outfit Recommendation (OR) in the fashion domain has evolved through two stages: Pre-defined Outfit Recommendation and Personalized Outfit Composition. However, both stages are constrained by existing fashion products, limiting their effectiveness in addressing users' diverse fashion needs. Recently, the advent of AI-generated content provides the opportunity for OR to transcend these limitations, showcasing the potential for personalized outfit generation and recommendation. To this end, we introduce a novel task called Generative Outfit Recommendation (GOR), aiming to generate a set of fashion images and compose them into a visually compatible outfit tailored to specific users. The key objectives of GOR lie in the high fidelity, compatibility, and personalization of generated outfits. To achieve these, we propose a generative outfit recommender model named DiFashion, which empowers exceptional diffusion models to accomplish the parallel generation of multiple fashion images. To ensure three objectives, we design three kinds of conditions to guide the parallel generation process and adopt Classifier-Free-Guidance to enhance the alignment between the generated images and conditions. We apply DiFashion on both personalized Fill-In-The-Blank and GOR tasks and conduct extensive experiments on iFashion and Polyvore-U datasets. The quantitative and human-involved qualitative evaluation demonstrate the superiority of DiFashion over competitive baselines." "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,yang2018recommendation,\cite{yang2018recommendation},From recommendation to generation: A novel fashion clothing advising framework,,,True,False,"Yang, Zilin and Su, Zhuo and Yang, Yang and Lin, Ge",2018.0,,,,,From recommendation to generation: A novel fashion clothing advising framework,From Recommendation to Generation: A Novel Fashion Clothing ...,https://ieeexplore.ieee.org/document/8634794,"From Recommendation to Generation: A Novel Fashion Clothing Advising Framework | IEEE Conference Publication | IEEE Xplore Publisher: IEEE In this paper, we combine visual features of clothing images, user's implicit feedback and the price factor to construct a recommendation model based on Siamese network and Bayesian personalized ranking to recommend clothing satisfying user's preference and consumption level. Recommendation system is expected to excavate valid information from a large amount of history records to learn user's preference and the attributes of the clothing they wish to purchase. 
" "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,Compatibility,\cite{Compatibility},Compatibility Family Learning for Item Recommendation and Generation,http://arxiv.org/abs/1712.01262v1,"Compatibility between items, such as clothes and shoes, is a major factor among customer's purchasing decisions. However, learning ""compatibility"" is challenging due to (1) broader notions of compatibility than those of similarity, (2) the asymmetric nature of compatibility, and (3) only a small set of compatible and incompatible items are observed. We propose an end-to-end trainable system to embed each item into a latent vector and project a query item into K compatible prototypes in the same space. These prototypes reflect the broad notions of compatibility. We refer to both the embedding and prototypes as ""Compatibility Family"". In our learned space, we introduce a novel Projected Compatibility Distance (PCD) function which is differentiable and ensures diversity by aiming for at least one prototype to be close to a compatible item, whereas none of the prototypes are close to an incompatible item. We evaluate our system on a toy dataset, two Amazon product datasets, and Polyvore outfit dataset. Our method consistently achieves state-of-the-art performance.
Finally, we show that we can visualize the candidate compatible prototypes using a Metric-regularized Conditional Generative Adversarial Network (MrCGAN), where the input is a projected prototype and the output is a generated image of a compatible item. We ask human evaluators to judge the relative compatibility between our generated images and images generated by CGANs conditioned directly on query items. Our generated images are significantly preferred, with roughly twice the number of votes as others." "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,FashionReGen24,\cite{FashionReGen24},FashionReGen: LLM-Empowered Fashion Report Generation,http://arxiv.org/abs/2403.06660v1,"Fashion analysis refers to the process of examining and evaluating trends, styles, and elements within the fashion industry to understand and interpret its current state, generating fashion reports. It is traditionally performed by fashion professionals based on their expertise and experience, which requires high labour cost and may also produce biased results for relying heavily on a small group of people. In this paper, to tackle the Fashion Report Generation (FashionReGen) task, we propose an intelligent Fashion Analyzing and Reporting system based on the advanced Large Language Models (LLMs), dubbed as GPT-FAR. Specifically, it tries to deliver FashionReGen based on effective catwalk analysis, which is equipped with several key procedures, namely, catwalk understanding, collective organization and analysis, and report generation. By posing and exploring such an open-ended, complex and domain-specific task of FashionReGen, it is able to test the general capability of LLMs in fashion domain. It also inspires the explorations of more high-level tasks with industrial significance in other domains. Video illustration and more materials of GPT-FAR can be found in https://github.com/CompFashion/FashionReGen.",True,True,"Yujuan Ding and Yunshan Ma and Wenqi Fan and Yige Yao and Tat{-}Seng Chua and Qing Li",2024.0,,,,,FashionReGen: LLM-Empowered Fashion Report Generation,FashionReGen: LLM-Empowered Fashion Report Generation,https://dl.acm.org/doi/10.1145/3589335.3651232,"In this paper, to tackle the Fashion Report Generation (FashionReGen) task, we propose an intelligent Fashion Analyzing and Reporting system" "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,CRAFT,\cite{CRAFT},"CRAFT: Complementary Recommendations Using Adversarial Feature Transformer",http://arxiv.org/abs/1804.10871v3,"Traditional approaches for complementary product recommendations rely on behavioral and non-visual data such as customer co-views or co-buys. However, certain domains such as fashion are primarily visual. We propose a framework that harnesses visual cues in an unsupervised manner to learn the distribution of co-occurring complementary items in real world images. Our model learns a non-linear transformation between the two manifolds of source and target complementary item categories (e.g., tops and bottoms in outfits). Given a large dataset of images containing instances of co-occurring object categories, we train a generative transformer network directly on the feature representation space by casting it as an adversarial optimization problem. Such a conditional generative model can produce multiple novel samples of complementary items (in the feature space) for a given query item.
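The Compatibility Family record above describes a Projected Compatibility Distance (PCD) that is differentiable and rewards having at least one of the K prototypes close to a compatible item. The abstract does not give the exact formula, so the soft-minimum below is only an assumed instantiation of that description, not the paper's definition:

```python
import torch

def pcd(prototypes, item, tau=0.1):
    """Assumed PCD sketch: `prototypes` (K, d) are the projections of a query
    item; `item` (d,) is a candidate. Compatibility is driven by the closest
    prototype, via a differentiable soft minimum over distances."""
    d = torch.norm(prototypes - item.unsqueeze(0), dim=1)  # (K,) distances
    return -tau * torch.logsumexp(-d / tau, dim=0)         # -> min(d) as tau -> 0
```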
The final recommendations are selected from the closest real world examples to the synthesized complementary features. We apply our framework to the task of recommending complementary tops for a given bottom clothing item. The recommendations made by our system are diverse, and are favored by human experts over the baseline approaches.",True,True,"Cong Phuoc Huynh and Arri Ciptadi and Ambrish Tyagi and Amit Agrawal",2018.0,,,,CoRR,"CRAFT: Complementary Recommendations Using Adversarial Feature Transformer",[PDF] Complementary Recommendation by Adversarial Feature Transform,https://assets.amazon.science/ee/8c/533b6ca64dec898bf74950316de1/craft-complementary-recommendation-by-adversarial-feature-transform.pdf,The feature transformer in CRAFT samples a conditional distribution to generate diverse and relevant item recommendations for a given query. "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,VITON,\cite{VITON},VITON: An Image-based Virtual Try-on Network,http://arxiv.org/abs/1711.08447v4,"We present an image-based VIrtual Try-On Network (VITON) without using 3D information in any form, which seamlessly transfers a desired clothing item onto the corresponding region of a person using a coarse-to-fine strategy. Conditioned upon a new clothing-agnostic yet descriptive person representation, our framework first generates a coarse synthesized image with the target clothing item overlaid on that same person in the same pose. We further enhance the initial blurry clothing area with a refinement network. The network is trained to learn how much detail to utilize from the target clothing item, and where to apply to the person in order to synthesize a photo-realistic image in which the target item deforms naturally with clear visual patterns. Experiments on our newly collected Zalando dataset demonstrate its promise in the image-based virtual try-on task over state-of-the-art generative models.",True,True,"Xintong Han and Zuxuan Wu and Zhe Wu and Ruichi Yu and Larry S. Davis",2018.0,,,,,VITON: An Image-based Virtual Try-on Network,[1711.08447] VITON: An Image-based Virtual Try-on Network,https://arxiv.org/abs/1711.08447,"by X Han · 2017 · Cited by 823 — We present an image-based VIrtual Try-On Network (VITON) without using 3D information in any form, which seamlessly transfers a desired clothing item onto the" "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,GP-VTON,\cite{GP-VTON},"{GP-VTON:} Towards General Purpose Virtual Try-On via Collaborative Local-Flow Global-Parsing Learning",,,True,False,"Zhenyu Xie and Zaiyu Huang and Xin Dong and Fuwei Zhao and Haoye Dong and Xijin Zhang and Feida Zhu and Xiaodan Liang",2023.0,,,,,"{GP-VTON:} Towards General Purpose Virtual Try-On via Collaborative Local-Flow Global-Parsing Learning",Incorporating Visual Correspondence into Diffusion Model for Virtual ...,https://openreview.net/forum?id=XXzOzJRyOZ,"Gp-vton: Towards general purpose virtual try-on via collaborative local-flow global-parsing learning. In CVPR, 2023."
"FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,DCI-VTON,\cite{DCI-VTON},"Taming the Power of Diffusion Models for High-Quality Virtual Try-On with Appearance Flow",http://arxiv.org/abs/2308.06101v1,"Virtual try-on is a critical image synthesis task that aims to transfer clothes from one image to another while preserving the details of both humans and clothes. While many existing methods rely on Generative Adversarial Networks (GANs) to achieve this, flaws can still occur, particularly at high resolutions. Recently, the diffusion model has emerged as a promising alternative for generating high-quality images in various applications. However, simply using clothes as a condition for guiding the diffusion model to inpaint is insufficient to maintain the details of the clothes. To overcome this challenge, we propose an exemplar-based inpainting approach that leverages a warping module to guide the diffusion model's generation effectively. The warping module performs initial processing on the clothes, which helps to preserve the local details of the clothes. We then combine the warped clothes with clothes-agnostic person image and add noise as the input of diffusion model. Additionally, the warped clothes is used as local conditions for each denoising process to ensure that the resulting output retains as much detail as possible. Our approach, namely Diffusion-based Conditional Inpainting for Virtual Try-ON (DCI-VTON), effectively utilizes the power of the diffusion model, and the incorporation of the warping module helps to produce high-quality and realistic virtual try-on results. Experimental results on VITON-HD demonstrate the effectiveness and superiority of our method.",True,True,"Junhong Gou and Siyu Sun and Jianfu Zhang and Jianlou Si and Chen Qian and Liqing Zhang",2023.0,,,,,"Taming the Power of Diffusion Models for High-Quality Virtual Try-On with Appearance Flow",bcmi/DCI-VTON-Virtual-Try-On - GitHub,https://github.com/bcmi/DCI-VTON-Virtual-Try-On,"[ACM Multimedia 2023] Taming the Power of Diffusion Models for High-Quality Virtual Try-On with Appearance Flow. We then combine the warped clothes with clothes-agnostic person image and add noise as the input of diffusion model. Our approach effectively utilizes the power of the diffusion model, and the incorporation of the warping module helps to produce high-quality and realistic virtual try-on results. After inference, you can put the results in the VITON-HD for inference and training of the diffusion model. To train a new model on VITON-HD, you should first modify the dataroot of VITON-HD dataset in `configs/viton512.yaml` and then use `main.py` for training. [ACM Multimedia 2023] Taming the Power of Diffusion Models for High-Quality Virtual Try-On with Appearance Flow." "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,stableVTON,\cite{stableVTON},"StableVITON: Learning Semantic Correspondence with Latent Diffusion Model for Virtual Try-On",http://arxiv.org/abs/2312.01725v1,"Given a clothing image and a person image, an image-based virtual try-on aims to generate a customized image that appears natural and accurately reflects the characteristics of the clothing image.
In this work, we aim to expand the applicability of the pre-trained diffusion model so that it can be utilized independently for the virtual try-on task. The main challenge is to preserve the clothing details while effectively utilizing the robust generative capability of the pre-trained model. In order to tackle these issues, we propose StableVITON, learning the semantic correspondence between the clothing and the human body within the latent space of the pre-trained diffusion model in an end-to-end manner. Our proposed zero cross-attention blocks not only preserve the clothing details by learning the semantic correspondence but also generate high-fidelity images by utilizing the inherent knowledge of the pre-trained model in the warping process. Through our proposed novel attention total variation loss and applying augmentation, we achieve the sharp attention map, resulting in a more precise representation of clothing details. StableVITON outperforms the baselines in qualitative and quantitative evaluation, showing promising quality in arbitrary person images. Our code is available at https://github.com/rlawjdghek/StableVITON.",True,True,"Jeongho Kim and Gyojung Gu and Minho Park and Sunghyun Park and Jaegul Choo",2023.0,,,,CoRR,"StableVITON: Learning Semantic Correspondence with Latent Diffusion Model for Virtual Try-On",[CVPR2024] StableVITON: Learning Semantic ...,https://github.com/rlawjdghek/StableVITON,This repository is the official implementation of StableVITON. StableVITON: Learning Semantic Correspondence with Latent Diffusion Model for Virtual Try-On "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,HMaVTON,\cite{HMaVTON},"Smart Fitting Room: A One-stop Framework for Matching-aware Virtual Try-on",http://arxiv.org/abs/2401.16825v2,"The development of virtual try-on has revolutionized online shopping by allowing customers to visualize themselves in various fashion items, thus extending the in-store try-on experience to the cyber space. Although virtual try-on has attracted considerable research initiatives, existing systems only focus on the quality of image generation, overlooking whether the fashion item is a good match to the given person and clothes. Recognizing this gap, we propose to design a one-stop Smart Fitting Room, with the novel formulation of matching-aware virtual try-on. Following this formulation, we design a Hybrid Matching-aware Virtual Try-On Framework (HMaVTON), which combines retrieval-based and generative methods to foster a more personalized virtual try-on experience. This framework integrates a hybrid mix-and-match module and an enhanced virtual try-on module. The former can recommend fashion items available on the platform to boost sales and generate clothes that meets the diverse tastes of consumers. The latter provides high-quality try-on effects, delivering a one-stop shopping service. To validate the effectiveness of our approach, we enlist the expertise of fashion designers for a professional evaluation, assessing the rationality and diversity of the clothes combinations and conducting an evaluation matrix analysis. Our method significantly enhances the practicality of virtual try-on.
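StableVITON (above) injects clothing features into a frozen latent-diffusion UNet through "zero cross-attention blocks". The released code may differ in detail; below is only a sketch of the general pattern under that assumption, with a zero-initialized output projection so the block is a no-op at initialization (as in ControlNet-style zero layers):

```python
import torch
import torch.nn as nn

class ZeroCrossAttention(nn.Module):
    """Illustrative zero-initialized cross-attention: person features attend to
    clothing features, and the result is injected residually. Because the
    output projection starts at zero, the frozen UNet is untouched at step 0."""
    def __init__(self, dim, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.proj_out = nn.Linear(dim, dim)
        nn.init.zeros_(self.proj_out.weight)  # zero init
        nn.init.zeros_(self.proj_out.bias)

    def forward(self, person_feats, cloth_feats):
        # Learn semantic correspondence: queries come from the person
        # features, keys/values from the clothing features.
        attended, _ = self.attn(person_feats, cloth_feats, cloth_feats)
        return person_feats + self.proj_out(attended)
```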
The code is available at https://github.com/Yzcreator/HMaVTON.",True,True,"Mingzhe Yu and Yunshan Ma and Lei Wu and Kai Cheng and Xue Li and Lei Meng and Tat{-}Seng Chua",2024.0,,,,,"Smart Fitting Room: A One-stop Framework for Matching-aware Virtual Try-on",A One-stop Framework for Matching-aware Virtual Try-On,https://dl.acm.org/doi/10.1145/3652583.3658064,This framework integrates a hybrid mix-and-match module and an enhanced virtual try-on module. The former can recommend fashion items available "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,Jedi,\cite{Jedi},"JeDi: Joint-Image Diffusion Models for Finetuning-Free Personalized Text-to-Image Generation",http://arxiv.org/abs/2407.06187v1,"Personalized text-to-image generation models enable users to create images that depict their individual possessions in diverse scenes, finding applications in various domains. To achieve the personalization capability, existing methods rely on finetuning a text-to-image foundation model on a user's custom dataset, which can be non-trivial for general users, resource-intensive, and time-consuming. Despite attempts to develop finetuning-free methods, their generation quality is much lower compared to their finetuning counterparts. In this paper, we propose Joint-Image Diffusion (JeDi), an effective technique for learning a finetuning-free personalization model. Our key idea is to learn the joint distribution of multiple related text-image pairs that share a common subject. To facilitate learning, we propose a scalable synthetic dataset generation technique. Once trained, our model enables fast and easy personalization at test time by simply using reference images as input during the sampling process. Our approach does not require any expensive optimization process or additional modules and can faithfully preserve the identity represented by any number of reference images. Experimental results show that our model achieves state-of-the-art generation quality, both quantitatively and qualitatively, significantly outperforming both the prior finetuning-based and finetuning-free personalization baselines.",True,True,"Yu Zeng and Vishal M. Patel and Haochen Wang and Xun Huang and Ting{-}Chun Wang and Ming{-}Yu Liu and Yogesh Balaji",2024.0,,,,,"JeDi: Joint-Image Diffusion Models for Finetuning-Free Personalized Text-to-Image Generation",[PDF] JeDi: Joint-Image Diffusion Models for Finetuning-Free Personalized ...,https://openaccess.thecvf.com/content/CVPR2024/papers/Zeng_JeDi_Joint-Image_Diffusion_Models_for_Finetuning-Free_Personalized_Text-to-Image_Generation_CVPR_2024_paper.pdf,"JeDi is a finetuning-free model for personalized text-to-image generation, learning from text-image pairs and using reference images for fast personalization." "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,ELITE,\cite{ELITE},"ELITE: Encoding Visual Concepts into Textual Embeddings for Customized Text-to-Image Generation",http://arxiv.org/abs/2302.13848v2,"In addition to the unprecedented ability in imaginary creation, large text-to-image models are expected to take customized concepts in image generation. Existing works generally learn such concepts in an optimization-based manner, yet bringing excessive computation or memory burden. In this paper, we instead propose a learning-based encoder, which consists of a global and a local mapping networks for fast and accurate customized text-to-image generation.
In specific, the global mapping network projects the hierarchical features of a given image into multiple new words in the textual word embedding space, i.e., one primary word for well-editable concept and other auxiliary words to exclude irrelevant disturbances (e.g., background). In the meantime, a local mapping network injects the encoded patch features into cross attention layers to provide omitted details, without sacrificing the editability of primary concepts. We compare our method with existing optimization-based approaches on a variety of user-defined concepts, and demonstrate that our method enables high-fidelity inversion and more robust editability with a significantly faster encoding process. Our code is publicly available at https://github.com/csyxwei/ELITE.",True,True,"Yuxiang Wei and Yabo Zhang and Zhilong Ji and Jinfeng Bai and Lei Zhang and Wangmeng Zuo",2023.0,,,,,"ELITE: Encoding Visual Concepts into Textual Embeddings for Customized Text-to-Image Generation",ELITE: Encoding Visual Concepts into Textual Embeddings for ...,https://openaccess.thecvf.com/content/ICCV2023/papers/Wei_ELITE_Encoding_Visual_Concepts_into_Textual_Embeddings_for_Customized_Text-to-Image_ICCV_2023_paper.pdf,"by Y Wei · 2023 · Cited by 417 — To achieve fast and accurate customized text-to-image generation, we propose an encoder ELITE to encode the visual concept into textual embeddings. As" "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,PathchDPO,\cite{PathchDPO},"PatchDPO: Patch-level DPO for Finetuning-free Personalized Image Generation",http://arxiv.org/abs/2412.03177v2,"Finetuning-free personalized image generation can synthesize customized images without test-time finetuning, attracting wide research interest owing to its high efficiency. Current finetuning-free methods simply adopt a single training stage with a simple image reconstruction task, and they typically generate low-quality images inconsistent with the reference images during test-time. To mitigate this problem, inspired by the recent DPO (i.e., direct preference optimization) technique, this work proposes an additional training stage to improve the pre-trained personalized generation models. However, traditional DPO only determines the overall superiority or inferiority of two samples, which is not suitable for personalized image generation because the generated images are commonly inconsistent with the reference images only in some local image patches. To tackle this problem, this work proposes PatchDPO that estimates the quality of image patches within each generated image and accordingly trains the model. To this end, PatchDPO first leverages the pre-trained vision model with a proposed self-supervised training method to estimate the patch quality. Next, PatchDPO adopts a weighted training approach to train the model with the estimated patch quality, which rewards the image patches with high quality while penalizing the image patches with low quality. Experiment results demonstrate that PatchDPO significantly improves the performance of multiple pre-trained personalized generation models, and achieves state-of-the-art performance on both single-object and multi-object personalized image generation. 
Our code is available at https://github.com/hqhQAQ/PatchDPO.",True,True,"Qihan Huang and Long Chan and Jinlong Liu and Wanggui He and Hao Jiang and Mingli Song and Jie Song",2024.0,,,,CoRR,"PatchDPO: Patch-level DPO for Finetuning-free Personalized Image Generation",[CVPR 2025] PatchDPO: Patch-level DPO for Finetuning- ...,https://github.com/hqhQAQ/PatchDPO,"GitHub - hqhQAQ/PatchDPO: [CVPR 2025] PatchDPO: Patch-level DPO for Finetuning-free Personalized Image Generation To tackle this problem, this work proposes PatchDPO that estimates the quality of image patches within each generated image and accordingly trains the model. With PatchDPO, our model achieves state-of-the-art performance on personalized image generation, with only 4 hours of training time on 8 GPUs, as shown in Table 1 & 2. Detailedly, `$output_dir` contains 30 subfolders (corresponding to 30 objects), and each subfolder saves the generated images for each object, which is also named with this object (_i.e._, the folder names are consistent with those in dreambench/dataset)." "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,BDPO,\cite{BDPO},"Boost Your Own Human Image Generation Model via Direct Preference Optimization with {AI} Feedback",,,True,False,"Sanghyeon Na and Yonggyu Kim and Hyunjoon Lee",2024.0,,,,CoRR,"Boost Your Own Human Image Generation Model via Direct Preference Optimization with {AI} Feedback",Boost Your Own Human Image Generation Model via Direct ...,https://ui.adsabs.harvard.edu/abs/2024arXiv240520216N/abstract,"Therefore, our approach, HG-DPO (Human image Generation through DPO), employs a novel curriculum learning framework that gradually improves the output of the model toward greater realism, making training more feasible." "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,DPO,\cite{DPO},"Direct Preference Optimization: Your Language Model is Secretly a Reward Model",http://arxiv.org/abs/2305.18290v3,"While large-scale unsupervised language models (LMs) learn broad world knowledge and some reasoning skills, achieving precise control of their behavior is difficult due to the completely unsupervised nature of their training. Existing methods for gaining such steerability collect human labels of the relative quality of model generations and fine-tune the unsupervised LM to align with these preferences, often with reinforcement learning from human feedback (RLHF). However, RLHF is a complex and often unstable procedure, first fitting a reward model that reflects the human preferences, and then fine-tuning the large unsupervised LM using reinforcement learning to maximize this estimated reward without drifting too far from the original model.
In this paper we introduce a new parameterization of the reward model in RLHF that enables extraction of the corresponding optimal policy in closed form, allowing us to solve the standard RLHF problem with only a simple classification loss. The resulting algorithm, which we call Direct Preference Optimization (DPO), is stable, performant, and computationally lightweight, eliminating the need for sampling from the LM during fine-tuning or performing significant hyperparameter tuning. Our experiments show that DPO can fine-tune LMs to align with human preferences as well as or better than existing methods. Notably, fine-tuning with DPO exceeds PPO-based RLHF in ability to control sentiment of generations, and matches or improves response quality in summarization and single-turn dialogue while being substantially simpler to implement and train.",True,True,"Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn",2023.0,,,,,"Direct Preference Optimization: Your Language Model is Secretly a Reward Model",Direct Preference Optimization: Your Language Model is Secretly a ...,https://arxiv.org/abs/2305.18290,"arXiv:2305.18290 (cs): Direct Preference Optimization: Your Language Model is Secretly a Reward Model, by Rafael Rafailov and 5 other authors." "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,Diffusion-DPO,\cite{Diffusion-DPO},Diffusion Model Alignment Using Direct Preference Optimization,http://arxiv.org/abs/2311.12908v1,"Large language models (LLMs) are fine-tuned using human comparison data with Reinforcement Learning from Human Feedback (RLHF) methods to make them better aligned with users' preferences. In contrast to LLMs, human preference learning has not been widely explored in text-to-image diffusion models; the best existing approach is to fine-tune a pretrained model using carefully curated high quality images and captions to improve visual appeal and text alignment. We propose Diffusion-DPO, a method to align diffusion models to human preferences by directly optimizing on human comparison data. Diffusion-DPO is adapted from the recently developed Direct Preference Optimization (DPO), a simpler alternative to RLHF which directly optimizes a policy that best satisfies human preferences under a classification objective. We re-formulate DPO to account for a diffusion model notion of likelihood, utilizing the evidence lower bound to derive a differentiable objective. Using the Pick-a-Pic dataset of 851K crowdsourced pairwise preferences, we fine-tune the base model of the state-of-the-art Stable Diffusion XL (SDXL)-1.0 model with Diffusion-DPO. Our fine-tuned base model significantly outperforms both base SDXL-1.0 and the larger SDXL-1.0 model consisting of an additional refinement model in human evaluation, improving visual appeal and prompt alignment.
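The DPO record above reduces RLHF to a single classification-style loss on preference pairs. The published objective is simple enough to state directly; a minimal PyTorch sketch over per-sample sequence log-probabilities:

```python
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss: logp_* are sequence log-probs under the policy being
    fine-tuned, ref_logp_* the same under the frozen reference model;
    _w marks the preferred ("winner") sample and _l the dispreferred one."""
    reward_w = logp_w - ref_logp_w  # implicit reward of the preferred sample
    reward_l = logp_l - ref_logp_l  # implicit reward of the dispreferred sample
    return -F.logsigmoid(beta * (reward_w - reward_l)).mean()
```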
We also develop a variant that uses AI feedback and has comparable performance to training on human preferences, opening the door for scaling of diffusion model alignment methods.",True,True,"Bram Wallace and Meihua Dang and Rafael Rafailov and Linqi Zhou and Aaron Lou and Senthil Purushwalkam and Stefano Ermon and Caiming Xiong and Shafiq Joty and Nikhil Naik",2023.0,,,,CoRR,Diffusion Model Alignment Using Direct Preference Optimization,Diffusion Model Alignment Using Direct Preference Optimization,http://arxiv.org/pdf/2311.12908v1,"Large language models (LLMs) are fine-tuned using human comparison data with Reinforcement Learning from Human Feedback (RLHF) methods to make them better aligned with users' preferences. In contrast to LLMs, human preference learning has not been widely explored in text-to-image diffusion models; the best existing approach is to fine-tune a pretrained model using carefully curated high quality images and captions to improve visual appeal and text alignment. We propose Diffusion-DPO, a method to align diffusion models to human preferences by directly optimizing on human comparison data. Diffusion-DPO is adapted from the recently developed Direct Preference Optimization (DPO), a simpler alternative to RLHF which directly optimizes a policy that best satisfies human preferences under a classification objective. We re-formulate DPO to account for a diffusion model notion of likelihood, utilizing the evidence lower bound to derive a differentiable objective. Using the Pick-a-Pic dataset of 851K crowdsourced pairwise preferences, we fine-tune the base model of the state-of-the-art Stable Diffusion XL (SDXL)-1.0 model with Diffusion-DPO. Our fine-tuned base model significantly outperforms both base SDXL-1.0 and the larger SDXL-1.0 model consisting of an additional refinement model in human evaluation, improving visual appeal and prompt alignment. We also develop a variant that uses AI feedback and has comparable performance to training on human preferences, opening the door for scaling of diffusion model alignment methods." "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,D3PO,\cite{D3PO},"Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model",http://arxiv.org/abs/2311.13231v3,"Using reinforcement learning with human feedback (RLHF) has shown significant promise in fine-tuning diffusion models. Previous methods start by training a reward model that aligns with human preferences, then leverage RL techniques to fine-tune the underlying models. However, crafting an efficient reward model demands extensive datasets, optimal architecture, and manual hyperparameter tuning, making the process both time and cost-intensive. The direct preference optimization (DPO) method, effective in fine-tuning large language models, eliminates the necessity for a reward model. However, the extensive GPU memory requirement of the diffusion model's denoising process hinders the direct application of the DPO method. To address this issue, we introduce the Direct Preference for Denoising Diffusion Policy Optimization (D3PO) method to directly fine-tune diffusion models. The theoretical analysis demonstrates that although D3PO omits training a reward model, it effectively functions as the optimal reward model trained using human feedback data to guide the learning process. This approach requires no training of a reward model, proving to be more direct, cost-effective, and minimizing computational overhead. 
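Diffusion-DPO (above) replaces the sequence log-likelihoods of text DPO with ELBO-based noise-prediction errors. A sketch of the resulting per-pair loss, folding the paper's timestep weighting into `beta`; the `err_*` inputs are assumed to be precomputed mean-squared denoising errors for the preferred (_w) and dispreferred (_l) images under the trained and reference UNets:

```python
import torch.nn.functional as F

def diffusion_dpo_loss(err_w, ref_err_w, err_l, ref_err_l, beta=2000.0):
    """Diffusion-DPO-style loss: a lower denoising error than the reference
    model acts as a higher implicit reward for that image."""
    inside = (ref_err_w - err_w) - (ref_err_l - err_l)
    return -F.logsigmoid(beta * inside).mean()
```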
In experiments, our method uses the relative scale of objectives as a proxy for human preference, delivering comparable results to methods using ground-truth rewards. Moreover, D3PO demonstrates the ability to reduce image distortion rates and generate safer images, overcoming challenges lacking robust reward models. Our code is publicly available at https://github.com/yk7333/D3PO.",True,True,"Kai Yang and Jian Tao and Jiafei Lyu and Chunjiang Ge and Jiaxin Chen and Qimai Li and Weihan Shen and Xiaolong Zhu and Xiu Li",2023.0,,,,CoRR,"Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model","yk7333/d3po: [CVPR 2024] Code for the paper ""Using ...",https://github.com/yk7333/d3po,D3PO can directly fine-tune the diffusion model through human feedback without the need to train a reward model. Our repository's code is referenced from DDPO. "FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization",2504.12900v1,SPO,\cite{SPO},"Step-aware Preference Optimization: Aligning Preference with Denoising Performance at Each Step",,,True,False,"Zhanhao Liang and Yuhui Yuan and Shuyang Gu and Bohan Chen and Tiankai Hang and Ji Li and Liang Zheng",2024.0,,,,CoRR,"Step-aware Preference Optimization: Aligning Preference with Denoising Performance at Each Step",AK - X,https://x.com/_akhaliq/status/1798920414644642035?lang=en,"Step-aware Preference Optimization Aligning Preference with Denoising Performance at Each Step Recently, Direct Preference Optimization (DPO)" Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,liSurveyGenerativeIR2024,\cite{liSurveyGenerativeIR2024},"From Matching to Generation: A Survey on Generative Information Retrieval",http://arxiv.org/abs/2404.14851v4,"Information Retrieval (IR) systems are crucial tools for users to access information, which have long been dominated by traditional methods relying on similarity matching. With the advancement of pre-trained language models, generative information retrieval (GenIR) emerges as a novel paradigm, attracting increasing attention. Based on the form of information provided to users, current research in GenIR can be categorized into two aspects: \textbf{(1) Generative Document Retrieval} (GR) leverages the generative model's parameters for memorizing documents, enabling retrieval by directly generating relevant document identifiers without explicit indexing. \textbf{(2) Reliable Response Generation} employs language models to directly generate information users seek, breaking the limitations of traditional IR in terms of document granularity and relevance matching while offering flexibility, efficiency, and creativity to meet practical needs. This paper aims to systematically review the latest research progress in GenIR. We will summarize the advancements in GR regarding model training and structure, document identifier, incremental learning, etc., as well as progress in reliable response generation in aspects of internal knowledge memorization, external knowledge augmentation, etc. We also review the evaluation, challenges and future developments in GenIR systems. This review aims to offer a comprehensive reference for researchers, encouraging further development in the GenIR field. 
Github Repository: https://github.com/RUC-NLPIR/GenIR-Survey",True,True,Xiaoxi Li and Jiajie Jin and Yujia Zhou and Yuyao Zhang and Peitian Zhang and Yutao Zhu and Zhicheng Dou,,,https://doi.org/10.48550/arXiv.2404.14851,10.48550/ARXIV.2404.14851,CoRR,"From Matching to Generation: A Survey on Generative Information Retrieval",From Matching to Generation: A Survey on Generative Information ...,https://dl.acm.org/doi/10.1145/3722552,"Currently, research in GenIR primarily focuses on two main patterns: (1) Generative Retrieval (GR), which involves retrieving documents by generating their" Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,white2025surveyinformationaccess,\cite{white2025surveyinformationaccess},Information Access in the Era of Generative AI,,,True,False,Ryen W. White and Chirag Shah,,,https://doi.org/10.1007/978-3-031-73147-1,,,Information Access in the Era of Generative AI,Information Access in the Era of Generative AI - SpringerLink,https://link.springer.com/book/10.1007/978-3-031-73147-1,"This book discusses GenAI and its role in information access, covering topics like e.g. interactions, evaluations, recommendations and future developments." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,metzlerRethinkingSearch2021,\cite{metzlerRethinkingSearch2021},Rethinking Search: Making Domain Experts out of Dilettantes,http://arxiv.org/abs/2105.02274v2,"When experiencing an information need, users want to engage with a domain expert, but often turn to an information retrieval system, such as a search engine, instead. Classical information retrieval systems do not answer information needs directly, but instead provide references to (hopefully authoritative) answers. Successful question answering systems offer a limited corpus created on-demand by human experts, which is neither timely nor scalable. Pre-trained language models, by contrast, are capable of directly generating prose that may be responsive to an information need, but at present they are dilettantes rather than domain experts -- they do not have a true understanding of the world, they are prone to hallucinating, and crucially they are incapable of justifying their utterances by referring to supporting documents in the corpus they were trained over. This paper examines how ideas from classical information retrieval and pre-trained language models can be synthesized and evolved into systems that truly deliver on the promise of domain expert advice.",True,True,"Metzler, Donald and Tay, Yi and Bahri, Dara and Najork, Marc",,,https://doi.org/10.1145/3476415.3476428,10.1145/3476415.3476428,SIGIR Forum,Rethinking Search: Making Domain Experts out of Dilettantes,Rethinking Search: Making Domain Experts out of Dilettantes,http://arxiv.org/pdf/2105.02274v2,"When experiencing an information need, users want to engage with a domain expert, but often turn to an information retrieval system, such as a search engine, instead. Classical information retrieval systems do not answer information needs directly, but instead provide references to (hopefully authoritative) answers. Successful question answering systems offer a limited corpus created on-demand by human experts, which is neither timely nor scalable. 
Pre-trained language models, by contrast, are capable of directly generating prose that may be responsive to an information need, but at present they are dilettantes rather than domain experts -- they do not have a true understanding of the world, they are prone to hallucinating, and crucially they are incapable of justifying their utterances by referring to supporting documents in the corpus they were trained over. This paper examines how ideas from classical information retrieval and pre-trained language models can be synthesized and evolved into systems that truly deliver on the promise of domain expert advice." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,decaoAutoregressiveEntityRetrieval2020,\cite{decaoAutoregressiveEntityRetrieval2020},Autoregressive Entity Retrieval,http://arxiv.org/abs/2010.00904v3,"Entities are at the center of how we represent and aggregate knowledge. For instance, Encyclopedias such as Wikipedia are structured by entities (e.g., one per Wikipedia article). The ability to retrieve such entities given a query is fundamental for knowledge-intensive tasks such as entity linking and open-domain question answering. Current approaches can be understood as classifiers among atomic labels, one for each entity. Their weight vectors are dense entity representations produced by encoding entity meta information such as their descriptions. This approach has several shortcomings: (i) context and entity affinity is mainly captured through a vector dot product, potentially missing fine-grained interactions; (ii) a large memory footprint is needed to store dense representations when considering large entity sets; (iii) an appropriately hard set of negative data has to be subsampled at training time. In this work, we propose GENRE, the first system that retrieves entities by generating their unique names, left to right, token-by-token in an autoregressive fashion. This mitigates the aforementioned technical issues since: (i) the autoregressive formulation directly captures relations between context and entity name, effectively cross encoding both; (ii) the memory footprint is greatly reduced because the parameters of our encoder-decoder architecture scale with vocabulary size, not entity count; (iii) the softmax loss is computed without subsampling negative data. We experiment with more than 20 datasets on entity disambiguation, end-to-end entity linking and document retrieval tasks, achieving new state-of-the-art or very competitive results while using a tiny fraction of the memory footprint of competing systems. Finally, we demonstrate that new entities can be added by simply specifying their names. Code and pre-trained models at https://github.com/facebookresearch/GENRE.",True,True,Nicola De Cao and Gautier Izacard and Sebastian Riedel and Fabio Petroni,,,https://openreview.net/forum?id=5k8F6UU39V,,,Autoregressive Entity Retrieval,Autoregressive Entity Retrieval,http://arxiv.org/pdf/2010.00904v3,"Entities are at the center of how we represent and aggregate knowledge. For instance, Encyclopedias such as Wikipedia are structured by entities (e.g., one per Wikipedia article). The ability to retrieve such entities given a query is fundamental for knowledge-intensive tasks such as entity linking and open-domain question answering. Current approaches can be understood as classifiers among atomic labels, one for each entity. 
Their weight vectors are dense entity representations produced by encoding entity meta information such as their descriptions. This approach has several shortcomings: (i) context and entity affinity is mainly captured through a vector dot product, potentially missing fine-grained interactions; (ii) a large memory footprint is needed to store dense representations when considering large entity sets; (iii) an appropriately hard set of negative data has to be subsampled at training time. In this work, we propose GENRE, the first system that retrieves entities by generating their unique names, left to right, token-by-token in an autoregressive fashion. This mitigates the aforementioned technical issues since: (i) the autoregressive formulation directly captures relations between context and entity name, effectively cross encoding both; (ii) the memory footprint is greatly reduced because the parameters of our encoder-decoder architecture scale with vocabulary size, not entity count; (iii) the softmax loss is computed without subsampling negative data. We experiment with more than 20 datasets on entity disambiguation, end-to-end entity linking and document retrieval tasks, achieving new state-of-the-art or very competitive results while using a tiny fraction of the memory footprint of competing systems. Finally, we demonstrate that new entities can be added by simply specifying their names. Code and pre-trained models at https://github.com/facebookresearch/GENRE." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,sunLearningTokenizeGenerative2023,\cite{sunLearningTokenizeGenerative2023},Learning to Tokenize for Generative Retrieval,http://arxiv.org/abs/2304.04171v1,"Conventional document retrieval techniques are mainly based on the index-retrieve paradigm. It is challenging to optimize pipelines based on this paradigm in an end-to-end manner. As an alternative, generative retrieval represents documents as identifiers (docid) and retrieves documents by generating docids, enabling end-to-end modeling of document retrieval tasks. However, it is an open question how one should define the document identifiers. Current approaches to the task of defining document identifiers rely on fixed rule-based docids, such as the title of a document or the result of clustering BERT embeddings, which often fail to capture the complete semantic information of a document. We propose GenRet, a document tokenization learning method to address the challenge of defining document identifiers for generative retrieval. GenRet learns to tokenize documents into short discrete representations (i.e., docids) via a discrete auto-encoding approach. Three components are included in GenRet: (i) a tokenization model that produces docids for documents; (ii) a reconstruction model that learns to reconstruct a document based on a docid; and (iii) a sequence-to-sequence retrieval model that generates relevant document identifiers directly for a designated query. By using an auto-encoding framework, GenRet learns semantic docids in a fully end-to-end manner. We also develop a progressive training scheme to capture the autoregressive nature of docids and to stabilize training. We conduct experiments on the NQ320K, MS MARCO, and BEIR datasets to assess the effectiveness of GenRet. GenRet establishes the new state-of-the-art on the NQ320K dataset. Especially, compared to generative retrieval baselines, GenRet can achieve significant improvements on the unseen documents. 
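GENRE (above), like the constrained auto-regressive decoding studied in this paper, keeps generation inside the set of valid identifiers by consulting a prefix tree at every decoding step. A minimal sketch of that mechanism; with Hugging Face's `generate()`, a function like `allowed_next_tokens` is typically plugged in through the `prefix_allowed_tokens_fn` argument (modulo special tokens):

```python
def build_trie(identifier_token_seqs):
    """Prefix tree over the tokenized identifiers of the corpus; every path
    from the root is a legal identifier prefix."""
    trie = {}
    for seq in identifier_token_seqs:
        node = trie
        for tok in seq:
            node = node.setdefault(tok, {})
    return trie

def allowed_next_tokens(trie, prefix):
    """Tokens permitted after `prefix`, so decoding can only ever emit an
    identifier that actually exists in the index."""
    node = trie
    for tok in prefix:
        node = node.get(tok)
        if node is None:
            return []  # the prefix left the trie: nothing is allowed
    return list(node.keys())
```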
GenRet also outperforms comparable baselines on MS MARCO and BEIR, demonstrating the method's generalizability.",True,True,"Sun, Weiwei and Yan, Lingyong and Chen, Zheng and Wang, Shuaiqiang and Zhu, Haichao and Ren, Pengjie and Chen, Zhumin and Yin, Dawei and Rijke, Maarten and Ren, Zhaochun",,,https://proceedings.neurips.cc/paper_files/paper/2023/file/91228b942a4528cdae031c1b68b127e8-Paper-Conference.pdf,,,Learning to Tokenize for Generative Retrieval,Learning to Tokenize for Generative Retrieval,http://arxiv.org/pdf/2304.04171v1,"Conventional document retrieval techniques are mainly based on the index-retrieve paradigm. It is challenging to optimize pipelines based on this paradigm in an end-to-end manner. As an alternative, generative retrieval represents documents as identifiers (docid) and retrieves documents by generating docids, enabling end-to-end modeling of document retrieval tasks. However, it is an open question how one should define the document identifiers. Current approaches to the task of defining document identifiers rely on fixed rule-based docids, such as the title of a document or the result of clustering BERT embeddings, which often fail to capture the complete semantic information of a document. We propose GenRet, a document tokenization learning method to address the challenge of defining document identifiers for generative retrieval. GenRet learns to tokenize documents into short discrete representations (i.e., docids) via a discrete auto-encoding approach. Three components are included in GenRet: (i) a tokenization model that produces docids for documents; (ii) a reconstruction model that learns to reconstruct a document based on a docid; and (iii) a sequence-to-sequence retrieval model that generates relevant document identifiers directly for a designated query. By using an auto-encoding framework, GenRet learns semantic docids in a fully end-to-end manner. We also develop a progressive training scheme to capture the autoregressive nature of docids and to stabilize training. We conduct experiments on the NQ320K, MS MARCO, and BEIR datasets to assess the effectiveness of GenRet. GenRet establishes the new state-of-the-art on the NQ320K dataset. Especially, compared to generative retrieval baselines, GenRet can achieve significant improvements on the unseen documents. GenRet also outperforms comparable baselines on MS MARCO and BEIR, demonstrating the method's generalizability." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,wangNeuralCorpusIndexer2023,\cite{wangNeuralCorpusIndexer2023},A Neural Corpus Indexer for Document Retrieval,http://arxiv.org/abs/2206.02743v3,"Current state-of-the-art document retrieval solutions mainly follow an index-retrieve paradigm, where the index is hard to be directly optimized for the final retrieval target. In this paper, we aim to show that an end-to-end deep neural network unifying training and indexing stages can significantly improve the recall performance of traditional methods. To this end, we propose Neural Corpus Indexer (NCI), a sequence-to-sequence network that generates relevant document identifiers directly for a designated query. To optimize the recall performance of NCI, we invent a prefix-aware weight-adaptive decoder architecture, and leverage tailored techniques including query generation, semantic document identifiers, and consistency-based regularization. 
Empirical studies demonstrated the superiority of NCI on two commonly used academic benchmarks, achieving +21.4% and +16.8% relative enhancement for Recall@1 on NQ320k dataset and R-Precision on TriviaQA dataset, respectively, compared to the best baseline method.",True,True,Yujing Wang and Yingyan Hou and Haonan Wang and Ziming Miao and Shibin Wu and Qi Chen and Yuqing Xia and Chengmin Chi and Guoshuai Zhao and Zheng Liu and Xing Xie and Hao Sun and Weiwei Deng and Qi Zhang and Mao Yang,,,http://papers.nips.cc/paper\_files/paper/2022/hash/a46156bd3579c3b268108ea6aca71d13-Abstract-Conference.html,,,A Neural Corpus Indexer for Document Retrieval,A Neural Corpus Indexer for Document Retrieval,http://arxiv.org/pdf/2206.02743v3,"Current state-of-the-art document retrieval solutions mainly follow an index-retrieve paradigm, where the index is hard to be directly optimized for the final retrieval target. In this paper, we aim to show that an end-to-end deep neural network unifying training and indexing stages can significantly improve the recall performance of traditional methods. To this end, we propose Neural Corpus Indexer (NCI), a sequence-to-sequence network that generates relevant document identifiers directly for a designated query. To optimize the recall performance of NCI, we invent a prefix-aware weight-adaptive decoder architecture, and leverage tailored techniques including query generation, semantic document identifiers, and consistency-based regularization. Empirical studies demonstrated the superiority of NCI on two commonly used academic benchmarks, achieving +21.4% and +16.8% relative enhancement for Recall@1 on NQ320k dataset and R-Precision on TriviaQA dataset, respectively, compared to the best baseline method." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,liLearningRankGenerative2023,\cite{liLearningRankGenerative2023},Learning to Rank in Generative Retrieval,http://arxiv.org/abs/2306.15222v2,"Generative retrieval stands out as a promising new paradigm in text retrieval that aims to generate identifier strings of relevant passages as the retrieval target. This generative paradigm taps into powerful generative language models, distinct from traditional sparse or dense retrieval methods. However, only learning to generate is insufficient for generative retrieval. Generative retrieval learns to generate identifiers of relevant passages as an intermediate goal and then converts predicted identifiers into the final passage rank list. The disconnect between the learning objective of autoregressive models and the desired passage ranking target leads to a learning gap. To bridge this gap, we propose a learning-to-rank framework for generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn to rank passages directly, optimizing the autoregressive model toward the final passage ranking target via a rank loss. This framework only requires an additional learning-to-rank training phase to enhance current generative retrieval systems and does not add any burden to the inference stage. We conducted experiments on three public benchmarks, and the results demonstrate that LTRGR achieves state-of-the-art performance among generative retrieval methods. 
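NCI and other DSI-style models (above) rely on hierarchical semantic document identifiers, commonly built by recursively clustering document embeddings so that the path of cluster indices becomes the docid. A sketch of that general recipe, not any one paper's exact settings:

```python
import numpy as np
from sklearn.cluster import KMeans

def semantic_ids(embeddings, k=10, max_leaf=100, prefix=()):
    """Recursively k-means document embeddings (n, d); each document's id is
    the tuple of cluster indices on its path, ending in its leaf position."""
    ids = {}
    if len(embeddings) <= max_leaf:
        for i in range(len(embeddings)):
            ids[i] = prefix + (i,)
        return ids
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(embeddings)
    for c in range(k):
        idx = np.where(labels == c)[0]
        sub = semantic_ids(embeddings[idx], k, max_leaf, prefix + (c,))
        for local, sid in sub.items():
            ids[int(idx[local])] = sid  # map local indices back to this level
    return ids
```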
The code and checkpoints are released at https://github.com/liyongqi67/LTRGR.",True,True,Yongqi Li and Nan Yang and Liang Wang and Furu Wei and Wenjie Li,,,https://doi.org/10.1609/aaai.v38i8.28717,10.1609/AAAI.V38I8.28717,,Learning to Rank in Generative Retrieval,Learning to Rank in Generative Retrieval,http://arxiv.org/pdf/2306.15222v2,"Generative retrieval stands out as a promising new paradigm in text retrieval that aims to generate identifier strings of relevant passages as the retrieval target. This generative paradigm taps into powerful generative language models, distinct from traditional sparse or dense retrieval methods. However, only learning to generate is insufficient for generative retrieval. Generative retrieval learns to generate identifiers of relevant passages as an intermediate goal and then converts predicted identifiers into the final passage rank list. The disconnect between the learning objective of autoregressive models and the desired passage ranking target leads to a learning gap. To bridge this gap, we propose a learning-to-rank framework for generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn to rank passages directly, optimizing the autoregressive model toward the final passage ranking target via a rank loss. This framework only requires an additional learning-to-rank training phase to enhance current generative retrieval systems and does not add any burden to the inference stage. We conducted experiments on three public benchmarks, and the results demonstrate that LTRGR achieves state-of-the-art performance among generative retrieval methods. The code and checkpoints are released at https://github.com/liyongqi67/LTRGR." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,Zhuang2022BridgingTG,\cite{Zhuang2022BridgingTG},"Bridging the Gap Between Indexing and Retrieval for Differentiable Search Index with Query Generation",http://arxiv.org/abs/2206.10128v3,"The Differentiable Search Index (DSI) is an emerging paradigm for information retrieval. Unlike traditional retrieval architectures where index and retrieval are two different and separate components, DSI uses a single transformer model to perform both indexing and retrieval. In this paper, we identify and tackle an important issue of current DSI models: the data distribution mismatch that occurs between the DSI indexing and retrieval processes. Specifically, we argue that, at indexing, current DSI methods learn to build connections between the text of long documents and the identifier of the documents, but then retrieval of document identifiers is based on queries that are commonly much shorter than the indexed documents. This problem is further exacerbated when using DSI for cross-lingual retrieval, where document text and query text are in different languages. To address this fundamental problem of current DSI models, we propose a simple yet effective indexing framework for DSI, called DSI-QG. When indexing, DSI-QG represents documents with a number of potentially relevant queries generated by a query generation model and re-ranked and filtered by a cross-encoder ranker. The presence of these queries at indexing allows the DSI models to connect a document identifier to a set of queries, hence mitigating data distribution mismatches present between the indexing and the retrieval phases. 
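LTRGR (above) adds a rank loss on top of the generation objective so that the autoregressive score of a relevant identifier beats that of an irrelevant one. The paper's exact pairing strategy may differ; the margin form below is a minimal sketch over sequence scores (e.g., summed token log-probs):

```python
import torch

def rank_loss(pos_scores, neg_scores, margin=1.0):
    """Margin-based rank loss: penalize pairs where the relevant passage's
    identifier score does not exceed the irrelevant one's by `margin`."""
    return torch.clamp(margin - (pos_scores - neg_scores), min=0).mean()
```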
Empirical results on popular mono-lingual and cross-lingual passage retrieval datasets show that DSI-QG significantly outperforms the original DSI model.",True,True,"Shengyao Zhuang and Houxing Ren and Linjun Shou and Jian Pei and Ming Gong and Zuccon, Guido and Daxin Jiang",,,https://api.semanticscholar.org/CorpusID:249890267,,ArXiv,"Bridging the Gap Between Indexing and Retrieval for Differentiable Search Index with Query Generation",Bridging the Gap Between Indexing and Retrieval for Differentiable ...,https://arxiv.org/abs/2206.10128,Missing: 04/08/2025 Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,Zhang2023TermSetsCB,\cite{Zhang2023TermSetsCB},Term-Sets Can Be Strong Document Identifiers For Auto-Regressive Search Engines,,,True,False,Peitian Zhang and Zheng Liu and Yujia Zhou and Zhicheng Dou and Zhao Cao,,,https://api.semanticscholar.org/CorpusID:258841428,,ArXiv,Term-Sets Can Be Strong Document Identifiers For Auto-Regressive Search Engines,[PDF] Term-Sets Can Be Strong Document Identifiers For Auto-Regressive ...,https://openreview.net/pdf?id=uZv73g6f1mL,We propose a novel framework AutoTSG for auto-regressive search engines. The proposed method is featured by its unordered term-based document identifier and the Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,yangAutoSearchIndexer2023,\cite{yangAutoSearchIndexer2023},Auto Search Indexer for End-to-End Document Retrieval,http://arxiv.org/abs/2310.12455v2,"Generative retrieval, which is a new advanced paradigm for document retrieval, has recently attracted research interests, since it encodes all documents into the model and directly generates the retrieved documents. However, its power is still underutilized since it heavily relies on the ""preprocessed"" document identifiers (docids), thus limiting its retrieval performance and ability to retrieve new documents. In this paper, we propose a novel fully end-to-end retrieval paradigm. It can not only end-to-end learn the best docids for existing and new documents automatically via a semantic indexing module, but also perform end-to-end document retrieval via an encoder-decoder-based generative model, namely Auto Search Indexer (ASI). Besides, we design a reparameterization mechanism to combine the above two modules into a joint optimization framework. Extensive experimental results demonstrate the superiority of our model over advanced baselines on both public and industrial datasets and also verify the ability to deal with new documents.",True,True,"Yang, Tianchi and Song, Minghui and Zhang, Zihan and Huang, Haizhen and Deng, Weiwei and Sun, Feng and Zhang, Qi",,,,,,Auto Search Indexer for End-to-End Document Retrieval,Auto Search Indexer for End-to-End Document Retrieval,https://openreview.net/forum?id=ZhZFUOV5hb¬eId=ORsULzg9Ip,"This paper presents an end-to-end generative information retrieval pipeline, Auto Search Indexer (ASI), that supports document-id assignment as well as" Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,tang2023semantic,\cite{tang2023semantic},"Semantic-Enhanced Differentiable Search Index Inspired by Learning Strategies",http://arxiv.org/abs/2305.15115v1,"Recently, a new paradigm called Differentiable Search Index (DSI) has been proposed for document retrieval, wherein a sequence-to-sequence model is learned to directly map queries to relevant document identifiers. 
The key idea behind DSI is to fully parameterize traditional ``index-retrieve'' pipelines within a single neural model, by encoding all documents in the corpus into the model parameters. In essence, DSI needs to resolve two major questions: (1) how to assign an identifier to each document, and (2) how to learn the associations between a document and its identifier. In this work, we propose a Semantic-Enhanced DSI model (SE-DSI) motivated by Learning Strategies in the area of Cognitive Psychology. Our approach advances original DSI in two ways: (1) For the document identifier, we take inspiration from Elaboration Strategies in human learning. Specifically, we assign each document an Elaborative Description based on the query generation technique, which is more meaningful than a string of integers in the original DSI; and (2) For the associations between a document and its identifier, we take inspiration from Rehearsal Strategies in human learning. Specifically, we select fine-grained semantic features from a document as Rehearsal Contents to improve document memorization. Both the offline and online experiments show improved retrieval performance over prevailing baselines.",True,True,"Tang, Yubao and Zhang, Ruqing and Guo, Jiafeng and Chen, Jiangui and Zhu, Zuowei and Wang, Shuaiqiang and Yin, Dawei and Cheng, Xueqi",,,https://doi.org/10.1145/3580305.3599903,10.1145/3580305.3599903,,"Semantic-Enhanced Differentiable Search Index Inspired by Learning Strategies",Semantic-Enhanced Differentiable Search Index Inspired ...,https://dl.acm.org/doi/10.1145/3580305.3599903,"In this work, we propose a Semantic-Enhanced DSI model (SE-DSI) motivated by Learning Strategies in the area of Cognitive Psychology." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,tang2024generative,\cite{tang2024generative},Generative Retrieval Meets Multi-Graded Relevance,http://arxiv.org/abs/2409.18409v1,"Generative retrieval represents a novel approach to information retrieval. It uses an encoder-decoder architecture to directly produce relevant document identifiers (docids) for queries. While this method offers benefits, current approaches are limited to scenarios with binary relevance data, overlooking the potential for documents to have multi-graded relevance. Extending generative retrieval to accommodate multi-graded relevance poses challenges, including the need to reconcile likelihood probabilities for docid pairs and the possibility of multiple relevant documents sharing the same identifier. To address these challenges, we introduce a framework called GRaded Generative Retrieval (GR$^2$). GR$^2$ focuses on two key components: ensuring relevant and distinct identifiers, and implementing multi-graded constrained contrastive training. First, we create identifiers that are both semantically relevant and sufficiently distinct to represent individual documents effectively. This is achieved by jointly optimizing the relevance and distinctness of docids through a combination of docid generation and autoencoder models. Second, we incorporate information about the relationship between relevance grades to guide the training process. We use a constrained contrastive training strategy to bring the representations of queries and the identifiers of their relevant documents closer together, based on their respective relevance grades. 
Extensive experiments on datasets with both multi-graded and binary relevance demonstrate the effectiveness of GR$^2$.",True,True,Yubao Tang and Ruqing Zhang and Jiafeng Guo and Maarten de Rijke and Wei Chen and Xueqi Cheng,,,https://openreview.net/forum?id=2xTkeyJFJb,,,Generative Retrieval Meets Multi-Graded Relevance,Generative Retrieval Meets Multi-Graded Relevance,https://proceedings.neurips.cc/paper_files/paper/2024/hash/853e781cb2af58956ed5c89aa59da3fc-Abstract-Conference.html,"Generative retrieval represents a novel approach to information retrieval, utilizing an encoder-decoder architecture to directly produce relevant document" Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,wuGenerativeRetrievalMultiVector2024,\cite{wuGenerativeRetrievalMultiVector2024},Generative Retrieval as Multi-Vector Dense Retrieval,http://arxiv.org/abs/2404.00684v1,"Generative retrieval generates identifiers of relevant documents in an end-to-end manner using a sequence-to-sequence architecture for a given query. The relation between generative retrieval and other retrieval methods, especially those based on matching within dense retrieval models, is not yet fully comprehended. Prior work has demonstrated that generative retrieval with atomic identifiers is equivalent to single-vector dense retrieval. Accordingly, generative retrieval exhibits behavior analogous to hierarchical search within a tree index in dense retrieval when using hierarchical semantic identifiers. However, prior work focuses solely on the retrieval stage without considering the deep interactions within the decoder of generative retrieval. In this paper, we fill this gap by demonstrating that generative retrieval and multi-vector dense retrieval share the same framework for measuring the relevance to a query of a document. Specifically, we examine the attention layer and prediction head of generative retrieval, revealing that generative retrieval can be understood as a special case of multi-vector dense retrieval. Both methods compute relevance as a sum of products of query and document vectors and an alignment matrix. We then explore how generative retrieval applies this framework, employing distinct strategies for computing document token vectors and the alignment matrix. We have conducted experiments to verify our conclusions and show that both paradigms exhibit commonalities of term matching in their alignment matrix.",True,True,Shiguang Wu and Wenda Wei and Mengqi Zhang and Zhumin Chen and Jun Ma and Zhaochun Ren and Maarten de Rijke and Pengjie Ren,,,https://doi.org/10.1145/3626772.3657697,10.1145/3626772.3657697,,Generative Retrieval as Multi-Vector Dense Retrieval,Generative Retrieval as Multi-Vector Dense Retrieval,https://dl.acm.org/doi/10.1145/3626772.3657697,Generative retrieval and multi-vector dense retrieval share the same framework for measuring the relevance to a query of a document. Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,seal2022,\cite{seal2022},"Autoregressive Search Engines: Generating Substrings as Document Identifiers",http://arxiv.org/abs/2204.10628v1,"Knowledge-intensive language tasks require NLP systems to both provide the correct answer and retrieve supporting evidence for it in a given corpus. Autoregressive language models are emerging as the de-facto standard for generating answers, with newer and more powerful systems emerging at an astonishing pace. 
In this paper we argue that all this (and future) progress can be directly applied to the retrieval problem with minimal intervention to the models' architecture. Previous work has explored ways to partition the search space into hierarchical structures and retrieve documents by autoregressively generating their unique identifier. In this work we propose an alternative that doesn't force any structure in the search space: using all ngrams in a passage as its possible identifiers. This setup allows us to use an autoregressive model to generate and score distinctive ngrams, that are then mapped to full passages through an efficient data structure. Empirically, we show this not only outperforms prior autoregressive approaches but also leads to an average improvement of at least 10 points over more established retrieval solutions for passage-level retrieval on the KILT benchmark, establishing new state-of-the-art downstream performance on some datasets, while using a considerably lighter memory footprint than competing systems. Code and pre-trained models at https://github.com/facebookresearch/SEAL.",True,True,"Bevilacqua, Michele and Ottaviano, Giuseppe and Lewis, Patrick and Yih, Scott and Riedel, Sebastian and Petroni, Fabio",,,,,Advances in Neural Information Processing Systems,"Autoregressive Search Engines: Generating Substrings as Document Identifiers",[PDF] Autoregressive Search Engines: Generating Substrings as ...,https://proceedings.neurips.cc/paper_files/paper/2022/file/cd88d62a2063fdaf7ce6f9068fb15dcd-Paper-Conference.pdf,"One way to approach retrieval with autoregressive models makes use of unique identifiers, i.e., string pointers to documents that are in some way easier to" Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,tayTransformerMemoryDifferentiable2022a,\cite{tayTransformerMemoryDifferentiable2022a},Transformer Memory as a Differentiable Search Index,http://arxiv.org/abs/2202.06991v3,"In this paper, we demonstrate that information retrieval can be accomplished with a single Transformer, in which all information about the corpus is encoded in the parameters of the model. To this end, we introduce the Differentiable Search Index (DSI), a new paradigm that learns a text-to-text model that maps string queries directly to relevant docids; in other words, a DSI model answers queries directly using only its parameters, dramatically simplifying the whole retrieval process. We study variations in how documents and their identifiers are represented, variations in training procedures, and the interplay between models and corpus sizes. Experiments demonstrate that given appropriate design choices, DSI significantly outperforms strong baselines such as dual encoder models. Moreover, DSI demonstrates strong generalization capabilities, outperforming a BM25 baseline in a zero-shot setup.",True,True,Yi Tay and Vinh Tran and Mostafa Dehghani and Jianmo Ni and Dara Bahri and Harsh Mehta and Zhen Qin and Kai Hui and Zhe Zhao and Jai Prakash Gupta and Tal Schuster and William W. Cohen and Donald Metzler,,,http://papers.nips.cc/paper_files/paper/2022/hash/892840a6123b5ec99ebaab8be1530fba-Abstract-Conference.html,,,Transformer Memory as a Differentiable Search Index,Transformer Memory as a Differentiable Search Index,http://arxiv.org/pdf/2202.06991v3,"In this paper, we demonstrate that information retrieval can be accomplished with a single Transformer, in which all information about the corpus is encoded in the parameters of the model.
To this end, we introduce the Differentiable Search Index (DSI), a new paradigm that learns a text-to-text model that maps string queries directly to relevant docids; in other words, a DSI model answers queries directly using only its parameters, dramatically simplifying the whole retrieval process. We study variations in how documents and their identifiers are represented, variations in training procedures, and the interplay between models and corpus sizes. Experiments demonstrate that given appropriate design choices, DSI significantly outperforms strong baselines such as dual encoder models. Moreover, DSI demonstrates strong generalization capabilities, outperforming a BM25 baseline in a zero-shot setup." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,dynamic-retriever2023,\cite{dynamic-retriever2023},DynamicRetriever: A Pre-trained Model-based IR System Without an Explicit Index,,,True,False,Yujia Zhou and Jing Yao and Zhicheng Dou and Ledell Wu and Ji-Rong Wen,,April,https://doi.org/10.1007/s11633-022-1373-9,,Mach. Intell. Res.,DynamicRetriever: A Pre-trained Model-based IR System Without an Explicit Index,[PDF] DynamicRetriever: A Pre-training Model-based IR System ... - arXiv,https://arxiv.org/pdf/2203.00537,"Specifically, we propose a pre-training model-based IR system with neither sparse nor dense index, called DynamicRetriever. It is comprised" Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,nguyen-2023-generative,\cite{nguyen-2023-generative},Generative Retrieval as Dense Retrieval,http://arxiv.org/abs/2306.11397v1,"Generative retrieval is a promising new neural retrieval paradigm that aims to optimize the retrieval pipeline by performing both indexing and retrieval with a single transformer model. However, this new paradigm faces challenges with updating the index and scaling to large collections. In this paper, we analyze two prominent variants of generative retrieval and show that they can be conceptually viewed as bi-encoders for dense retrieval. Specifically, we analytically demonstrate that the generative retrieval process can be decomposed into dot products between query and document vectors, similar to dense retrieval.
This analysis leads us to propose a new variant of generative retrieval, called Tied-Atomic, which addresses the updating and scaling issues by incorporating techniques from dense retrieval. In experiments on two datasets, NQ320k and the full MSMARCO, we confirm that this approach does not reduce retrieval effectiveness while enabling the model to scale to large collections." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,zengScalableEffectiveGenerative2023b,\cite{zengScalableEffectiveGenerative2023b},Scalable and Effective Generative Information Retrieval,http://arxiv.org/abs/2311.09134v1,"Recent research has shown that transformer networks can be used as differentiable search indexes by representing each document as a sequences of document ID tokens. These generative retrieval models cast the retrieval problem to a document ID generation problem for each given query. Despite their elegant design, existing generative retrieval models only perform well on artificially-constructed and small-scale collections. This has led to serious skepticism in the research community on their real-world impact. This paper represents an important milestone in generative retrieval research by showing, for the first time, that generative retrieval models can be trained to perform effectively on large-scale standard retrieval benchmarks. For doing so, we propose RIPOR- an optimization framework for generative retrieval that can be adopted by any encoder-decoder architecture. RIPOR is designed based on two often-overlooked fundamental design considerations in generative retrieval. First, given the sequential decoding nature of document ID generation, assigning accurate relevance scores to documents based on the whole document ID sequence is not sufficient. To address this issue, RIPOR introduces a novel prefix-oriented ranking optimization algorithm. Second, initial document IDs should be constructed based on relevance associations between queries and documents, instead of the syntactic and semantic information in the documents. RIPOR addresses this issue using a relevance-based document ID construction approach that quantizes relevance-based representations learned for documents. Evaluation on MSMARCO and TREC Deep Learning Track reveals that RIPOR surpasses state-of-the-art generative retrieval models by a large margin (e.g., 30.5% MRR improvements on MS MARCO Dev Set), and perform better on par with popular dense retrieval models.",True,True,Hansi Zeng and Chen Luo and Bowen Jin and Sheikh Muhammad Sarwar and Tianxin Wei and Hamed Zamani,,,https://doi.org/10.1145/3589334.3645477,10.1145/3589334.3645477,,Scalable and Effective Generative Information Retrieval,Scalable and Effective Generative Information Retrieval,http://arxiv.org/pdf/2311.09134v1,"Recent research has shown that transformer networks can be used as differentiable search indexes by representing each document as a sequences of document ID tokens. These generative retrieval models cast the retrieval problem to a document ID generation problem for each given query. Despite their elegant design, existing generative retrieval models only perform well on artificially-constructed and small-scale collections. This has led to serious skepticism in the research community on their real-world impact. 
This paper represents an important milestone in generative retrieval research by showing, for the first time, that generative retrieval models can be trained to perform effectively on large-scale standard retrieval benchmarks. For doing so, we propose RIPOR- an optimization framework for generative retrieval that can be adopted by any encoder-decoder architecture. RIPOR is designed based on two often-overlooked fundamental design considerations in generative retrieval. First, given the sequential decoding nature of document ID generation, assigning accurate relevance scores to documents based on the whole document ID sequence is not sufficient. To address this issue, RIPOR introduces a novel prefix-oriented ranking optimization algorithm. Second, initial document IDs should be constructed based on relevance associations between queries and documents, instead of the syntactic and semantic information in the documents. RIPOR addresses this issue using a relevance-based document ID construction approach that quantizes relevance-based representations learned for documents. Evaluation on MSMARCO and TREC Deep Learning Track reveals that RIPOR surpasses state-of-the-art generative retrieval models by a large margin (e.g., 30.5% MRR improvements on MS MARCO Dev Set), and perform better on par with popular dense retrieval models." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,askariFewshotIndexing2024,\cite{askariFewshotIndexing2024},Generative Retrieval with Few-shot Indexing,http://arxiv.org/abs/2408.02152v1,"Existing generative retrieval (GR) approaches rely on training-based indexing, i.e., fine-tuning a model to memorise the associations between a query and the document identifier (docid) of a relevant document. Training-based indexing has three limitations: high training overhead, under-utilization of the pre-trained knowledge of large language models (LLMs), and challenges in adapting to a dynamic document corpus. To address the above issues, we propose a novel few-shot indexing-based GR framework (Few-Shot GR). It has a novel few-shot indexing process, where we prompt an LLM to generate docids for all documents in a corpus, ultimately creating a docid bank for the entire corpus. During retrieval, we feed a query to the same LLM and constrain it to generate a docid within the docid bank created during indexing, and then map the generated docid back to its corresponding document. Few-Shot GR relies solely on prompting an LLM without requiring any training, making it more efficient. Moreover, we devise few-shot indexing with one-to-many mapping to further enhance Few-Shot GR. 
Experiments show that Few-Shot GR achieves superior performance to state-of-the-art GR methods that require heavy training.",True,True,Arian Askari and Chuan Meng and Mohammad Aliannejadi and Zhaochun Ren and Evangelos Kanoulas and Suzan Verberne,,,https://doi.org/10.48550/arXiv.2408.02152,10.48550/ARXIV.2408.02152,CoRR,Generative Retrieval with Few-shot Indexing,(PDF) Generative Retrieval with Few-shot Indexing - ResearchGate,https://www.researchgate.net/publication/382884626_Generative_Retrieval_with_Few-shot_Indexing,"It has a novel few-shot indexing process, where we prompt an LLM to generate docids for all documents in a corpus, ultimately creating a docid" Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,cont-learning-gr2023cikm,\cite{cont-learning-gr2023cikm},Continual Learning for Generative Retrieval over Dynamic Corpora,http://arxiv.org/abs/2308.14968v1,"Generative retrieval (GR) directly predicts the identifiers of relevant documents (i.e., docids) based on a parametric model. It has achieved solid performance on many ad-hoc retrieval tasks. So far, these tasks have assumed a static document collection. In many practical scenarios, however, document collections are dynamic, where new documents are continuously added to the corpus. The ability to incrementally index new documents while preserving the ability to answer queries with both previously and newly indexed relevant documents is vital to applying GR models. In this paper, we address this practical continual learning problem for GR. We put forward a novel Continual-LEarner for generatiVE Retrieval (CLEVER) model and make two major contributions to continual learning for GR: (i) To encode new documents into docids with low computational cost, we present Incremental Product Quantization, which updates a partial quantization codebook according to two adaptive thresholds; and (ii) To memorize new documents for querying without forgetting previous knowledge, we propose a memory-augmented learning mechanism, to form meaningful connections between old and new documents. Empirical results demonstrate the effectiveness and efficiency of the proposed model.",True,True,"Chen, Jiangui and Zhang, Ruqing and Guo, Jiafeng and de Rijke, Maarten and Chen, Wei and Fan, Yixing and Cheng, Xueqi",,,https://doi.org/10.1145/3583780.3614821,10.1145/3583780.3614821,,Continual Learning for Generative Retrieval over Dynamic Corpora,Continual Learning for Generative Retrieval over Dynamic Corpora,http://arxiv.org/pdf/2308.14968v1,"Generative retrieval (GR) directly predicts the identifiers of relevant documents (i.e., docids) based on a parametric model. It has achieved solid performance on many ad-hoc retrieval tasks. So far, these tasks have assumed a static document collection. In many practical scenarios, however, document collections are dynamic, where new documents are continuously added to the corpus. The ability to incrementally index new documents while preserving the ability to answer queries with both previously and newly indexed relevant documents is vital to applying GR models. In this paper, we address this practical continual learning problem for GR. 
We put forward a novel Continual-LEarner for generatiVE Retrieval (CLEVER) model and make two major contributions to continual learning for GR: (i) To encode new documents into docids with low computational cost, we present Incremental Product Quantization, which updates a partial quantization codebook according to two adaptive thresholds; and (ii) To memorize new documents for querying without forgetting previous knowledge, we propose a memory-augmented learning mechanism, to form meaningful connections between old and new documents. Empirical results demonstrate the effectiveness and efficiency of the proposed model." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,liu2024robustnessgenerative,\cite{liu2024robustnessgenerative},On the Robustness of Generative Information Retrieval Models,http://arxiv.org/abs/2412.18768v1,"Generative information retrieval methods retrieve documents by directly generating their identifiers. Much effort has been devoted to developing effective generative IR models. Less attention has been paid to the robustness of these models. It is critical to assess the out-of-distribution (OOD) generalization of generative IR models, i.e., how would such models generalize to new distributions? To answer this question, we focus on OOD scenarios from four perspectives in retrieval problems: (i)query variations; (ii)unseen query types; (iii)unseen tasks; and (iv)corpus expansion. Based on this taxonomy, we conduct empirical studies to analyze the OOD robustness of representative generative IR models against dense retrieval models. Our empirical results indicate that the OOD robustness of generative IR models is in need of improvement. By inspecting the OOD robustness of generative IR models we aim to contribute to the development of more reliable IR models. The code is available at \url{https://github.com/Davion-Liu/GR_OOD}.",True,True,Yu-An Liu and Ruqing Zhang and Jiafeng Guo and Changjiang Zhou and Maarten de Rijke and Xueqi Cheng,,,https://arxiv.org/abs/2412.18768,,,On the Robustness of Generative Information Retrieval Models,On the Robustness of Generative Information Retrieval Models,http://arxiv.org/pdf/2412.18768v1,"Generative information retrieval methods retrieve documents by directly generating their identifiers. Much effort has been devoted to developing effective generative IR models. Less attention has been paid to the robustness of these models. It is critical to assess the out-of-distribution (OOD) generalization of generative IR models, i.e., how would such models generalize to new distributions? To answer this question, we focus on OOD scenarios from four perspectives in retrieval problems: (i)query variations; (ii)unseen query types; (iii)unseen tasks; and (iv)corpus expansion. Based on this taxonomy, we conduct empirical studies to analyze the OOD robustness of representative generative IR models against dense retrieval models. Our empirical results indicate that the OOD robustness of generative IR models is in need of improvement. By inspecting the OOD robustness of generative IR models we aim to contribute to the development of more reliable IR models. The code is available at \url{https://github.com/Davion-Liu/GR_OOD}." 
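Several of the generative retrieval records above (DSI, SEAL, AutoTSG, RIPOR) depend on constrained decoding that restricts generation to identifier strings actually present in the corpus, which is also the mechanism the parent paper interrogates. The following is a minimal sketch of that idea, assuming a token-level trie over docids; the greedy loop and the toy uniform scorer are illustrative stand-ins for the constrained beam search and the trained model these systems use.

```python
# Minimal sketch of trie-constrained decoding over docid token sequences.
# All names are illustrative; real systems pass an equivalent
# prefix-allowed-tokens function to beam search instead of this greedy loop.
from math import log

class TrieNode:
    def __init__(self):
        self.children = {}  # token id -> TrieNode

def build_trie(docids):
    """Index every valid docid so decoding can be restricted to prefixes
    that extend to at least one real document."""
    root = TrieNode()
    for seq in docids:
        node = root
        for tok in seq:
            node = node.children.setdefault(tok, TrieNode())
    return root

def allowed_next_tokens(trie, prefix):
    node = trie
    for tok in prefix:
        node = node.children.get(tok)
        if node is None:
            return set()
    return set(node.children)

def greedy_constrained_decode(step_logprobs, trie, max_len):
    """step_logprobs(prefix) -> {token: logprob}; a stand-in for the model."""
    prefix, score = [], 0.0
    for _ in range(max_len):
        valid = allowed_next_tokens(trie, prefix)
        if not valid:
            break  # no children left: the prefix is a complete docid
        scores = step_logprobs(prefix)
        tok = max(valid, key=lambda t: scores.get(t, float("-inf")))
        prefix.append(tok)
        score += scores.get(tok, float("-inf"))
    return prefix, score

# Toy usage: two docids over a tiny vocabulary, a uniform "model".
trie = build_trie([[1, 2, 3], [1, 4]])
uniform = lambda prefix: {t: log(0.25) for t in range(5)}
print(greedy_constrained_decode(uniform, trie, max_len=4))  # e.g. ([1, 2, 3], ...)
```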
Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,liuRobustnessGenerativeRetrieval2023,\cite{liuRobustnessGenerativeRetrieval2023},On the Robustness of Generative Retrieval Models: An Out-of-Distribution Perspective,,,True,False,Yu{-}An Liu and Ruqing Zhang and Jiafeng Guo and Wei Chen and Xueqi Cheng,,,https://doi.org/10.48550/arXiv.2306.12756,10.48550/ARXIV.2306.12756,CoRR,On the Robustness of Generative Retrieval Models: An Out-of-Distribution Perspective,On the Robustness of Generative Retrieval Models: An Out ...,https://arxiv.org/abs/2306.12756,"arXiv:2306.12756 (cs). On the Robustness of Generative Retrieval Models: An Out-of-Distribution Perspective, by Yu-An Liu and 4 other authors." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,leeNonparametricDecodingGenerative2023,\cite{leeNonparametricDecodingGenerative2023},Nonparametric Decoding for Generative Retrieval,http://arxiv.org/abs/2210.02068v3,"The generative retrieval model depends solely on the information encoded in its model parameters without external memory, its information capacity is limited and fixed. To overcome the limitation, we propose Nonparametric Decoding (Np Decoding) which can be applied to existing generative retrieval models. Np Decoding uses nonparametric contextualized vocab embeddings (external memory) rather than vanilla vocab embeddings as decoder vocab embeddings. By leveraging the contextualized vocab embeddings, the generative retrieval model is able to utilize both the parametric and nonparametric space. Evaluation over 9 datasets (8 single-hop and 1 multi-hop) in the document retrieval task shows that applying Np Decoding to generative retrieval models significantly improves the performance.
We also show that Np Decoding is data- and parameter-efficient, and shows high performance in the zero-shot setting." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,yuan2024generative-memory-burden,\cite{yuan2024generative-memory-burden},Generative Dense Retrieval: Memory Can Be a Burden,http://arxiv.org/abs/2401.10487v1,"Generative Retrieval (GR), autoregressively decoding relevant document identifiers given a query, has been shown to perform well under the setting of small-scale corpora. By memorizing the document corpus with model parameters, GR implicitly achieves deep interaction between query and document. However, such a memorizing mechanism faces three drawbacks: (1) Poor memory accuracy for fine-grained features of documents; (2) Memory confusion gets worse as the corpus size increases; (3) Huge memory update costs for new documents. To alleviate these problems, we propose the Generative Dense Retrieval (GDR) paradigm. Specifically, GDR first uses the limited memory volume to achieve inter-cluster matching from query to relevant document clusters. Memorizing-free matching mechanism from Dense Retrieval (DR) is then introduced to conduct fine-grained intra-cluster matching from clusters to relevant documents. The coarse-to-fine process maximizes the advantages of GR's deep interaction and DR's scalability. Besides, we design a cluster identifier constructing strategy to facilitate corpus memory and a cluster-adaptive negative sampling strategy to enhance the intra-cluster mapping ability. Empirical results show that GDR obtains an average of 3.0 R@100 improvement on NQ dataset under multiple settings and has better scalability.",True,True,Peiwen Yuan and Xinglin Wang and Shaoxiong Feng and Boyuan Pan and Yiwei Li and Heda Wang and Xupeng Miao and Kan Li,,,https://aclanthology.org/2024.eacl-long.173,,,Generative Dense Retrieval: Memory Can Be a Burden,Generative Dense Retrieval: Memory Can Be a Burden,http://arxiv.org/pdf/2401.10487v1,"Generative Retrieval (GR), autoregressively decoding relevant document identifiers given a query, has been shown to perform well under the setting of small-scale corpora. By memorizing the document corpus with model parameters, GR implicitly achieves deep interaction between query and document. However, such a memorizing mechanism faces three drawbacks: (1) Poor memory accuracy for fine-grained features of documents; (2) Memory confusion gets worse as the corpus size increases; (3) Huge memory update costs for new documents. To alleviate these problems, we propose the Generative Dense Retrieval (GDR) paradigm. Specifically, GDR first uses the limited memory volume to achieve inter-cluster matching from query to relevant document clusters. Memorizing-free matching mechanism from Dense Retrieval (DR) is then introduced to conduct fine-grained intra-cluster matching from clusters to relevant documents. The coarse-to-fine process maximizes the advantages of GR's deep interaction and DR's scalability. Besides, we design a cluster identifier constructing strategy to facilitate corpus memory and a cluster-adaptive negative sampling strategy to enhance the intra-cluster mapping ability. Empirical results show that GDR obtains an average of 3.0 R@100 improvement on NQ dataset under multiple settings and has better scalability." 
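The GDR record above describes a coarse-to-fine pipeline: a generative model first scores cluster identifiers, then a memorizing-free dense matcher ranks documents inside the selected clusters. The sketch below illustrates only that control flow; the cluster log-probabilities, embeddings, and cluster assignments are synthetic placeholders, not GDR's trained components.

```python
# Minimal sketch of a GDR-style coarse-to-fine retrieval step, under the
# assumptions stated above; the real system trains these components jointly.
import numpy as np

def coarse_to_fine(query_emb, cluster_logprobs, clusters, doc_embs,
                   k_clusters=2, k_docs=3):
    """cluster_logprobs: generative model's log p(cluster | query);
    clusters: cluster id -> list of doc ids; doc_embs: doc id -> vector."""
    # Stage 1 (generative): keep the k most probable cluster identifiers.
    top = sorted(cluster_logprobs, key=cluster_logprobs.get, reverse=True)[:k_clusters]
    # Stage 2 (dense): inner-product matching inside the surviving clusters.
    candidates = [d for c in top for d in clusters[c]]
    scored = [(d, float(query_emb @ doc_embs[d])) for d in candidates]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:k_docs]

rng = np.random.default_rng(0)
doc_embs = {d: rng.normal(size=8) for d in range(6)}
clusters = {"c0": [0, 1, 2], "c1": [3, 4], "c2": [5]}
q = rng.normal(size=8)
print(coarse_to_fine(q, {"c0": -0.1, "c1": -1.2, "c2": -2.0}, clusters, doc_embs))
```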
Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,wangNOVOLearnableInterpretable2023,\cite{wangNOVOLearnableInterpretable2023},NOVO: Learnable and Interpretable Document Identifiers for Model-Based IR,,,True,False,"Wang, Zihan and Zhou, Yujia and Tu, Yiteng and Dou, Zhicheng",,,https://doi.org/10.1145/3583780.3614993,10.1145/3583780.3614993,,NOVO: Learnable and Interpretable Document Identifiers for Model-Based IR,Learnable and Interpretable Document Identifiers for Model ...,https://www.researchgate.net/publication/374903378_NOVO_Learnable_and_Interpretable_Document_Identifiers_for_Model-Based_IR,"NOVO [389] introduces learnable continuous N-gram DocIDs, refining embeddings through query denoising and retrieval tasks. LMIndexer [153] generates neural" Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,kishoreIncDSI2023,\cite{kishoreIncDSI2023},IncDSI: Incrementally Updatable Document Retrieval,http://arxiv.org/abs/2307.10323v2,"Differentiable Search Index is a recently proposed paradigm for document retrieval, that encodes information about a corpus of documents within the parameters of a neural network and directly maps queries to corresponding documents. These models have achieved state-of-the-art performances for document retrieval across many benchmarks. These kinds of models have a significant limitation: it is not easy to add new documents after a model is trained. We propose IncDSI, a method to add documents in real time (about 20-50ms per document), without retraining the model on the entire dataset (or even parts thereof). Instead we formulate the addition of documents as a constrained optimization problem that makes minimal changes to the network parameters. Although orders of magnitude faster, our approach is competitive with re-training the model on the whole dataset and enables the development of document retrieval systems that can be updated with new information in real-time. Our code for IncDSI is available at https://github.com/varshakishore/IncDSI.",True,True,"Kishore, Varsha and Wan, Chao and Lovelace, Justin and Artzi, Yoav and Weinberger, Kilian Q.",,,,,,IncDSI: Incrementally Updatable Document Retrieval,IncDSI: Incrementally Updatable Document Retrieval,http://arxiv.org/pdf/2307.10323v2,"Differentiable Search Index is a recently proposed paradigm for document retrieval, that encodes information about a corpus of documents within the parameters of a neural network and directly maps queries to corresponding documents. These models have achieved state-of-the-art performances for document retrieval across many benchmarks. These kinds of models have a significant limitation: it is not easy to add new documents after a model is trained. We propose IncDSI, a method to add documents in real time (about 20-50ms per document), without retraining the model on the entire dataset (or even parts thereof). Instead we formulate the addition of documents as a constrained optimization problem that makes minimal changes to the network parameters. Although orders of magnitude faster, our approach is competitive with re-training the model on the whole dataset and enables the development of document retrieval systems that can be updated with new information in real-time. Our code for IncDSI is available at https://github.com/varshakishore/IncDSI." 
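IncDSI above casts real-time document addition as a small constrained optimization over one new docid classifier row, leaving the rest of the network frozen. The sketch below uses a plain hinge-style surrogate of that idea for brevity; it is not the paper's exact objective, and all tensors are synthetic.

```python
# Simplified sketch of IncDSI-style document addition: learn a single new
# docid vector so that queries for the new document score above all existing
# docids, without retraining the network. The hinge loss here is a toy
# surrogate of the paper's constrained objective, chosen for brevity.
import numpy as np

def add_document(new_query_embs, docid_matrix, lr=0.1, steps=200, margin=0.5):
    v = new_query_embs.mean(axis=0).copy()        # warm-start from its queries
    for _ in range(steps):
        grad = np.zeros_like(v)
        for q in new_query_embs:
            rival = docid_matrix @ q              # scores of existing docids
            if q @ v < rival.max() + margin:      # margin constraint violated
                grad -= q                         # push v toward this query
        v -= lr * grad / len(new_query_embs)
    return v

rng = np.random.default_rng(1)
D = rng.normal(size=(100, 16))                    # existing docid classifier rows
queries = rng.normal(size=(5, 16)) + 2.0          # queries for the new document
v_new = add_document(queries, D)
# Check: does the new docid now win for each of its queries?
print((queries @ v_new > (D @ queries.T).max(axis=0)).all())
```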
Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,mehtaDSIpp2023,\cite{mehtaDSIpp2023},{DSI}++: Updating Transformer Memory with New Documents,,,True,False,"Mehta, Sanket Vaibhav and Gupta, Jai and Tay, Yi and Dehghani, Mostafa and Tran, Vinh Q. and Rao, Jinfeng and Najork, Marc and Strubell, Emma and Metzler, Donald",,,https://aclanthology.org/2023.emnlp-main.510/,10.18653/v1/2023.emnlp-main.510,,{DSI}++: Updating Transformer Memory with New Documents,DSI++: Updating Transformer Memory with New Documents,https://aclanthology.org/2023.emnlp-main.510/,"DSI++: Updating Transformer Memory with New Documents (Mehta et al., EMNLP 2023). Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Singapore, pages 8198–8213." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,guoContinualGenerative2024,\cite{guoContinualGenerative2024},CorpusBrain++: A Continual Generative Pre-Training Framework for Knowledge-Intensive Language Tasks,,,True,False,Jiafeng Guo and Changjiang Zhou and Ruqing Zhang and Jiangui Chen and Maarten de Rijke and Yixing Fan and Xueqi Cheng,,,https://arxiv.org/abs/2402.16767,,,CorpusBrain++: A Continual Generative Pre-Training Framework for Knowledge-Intensive Language Tasks,[2402.16767] CorpusBrain++: A Continual Generative Pre-Training ...,https://arxiv.org/abs/2402.16767,"CorpusBrain++: A Continual Generative Pre-Training Framework for Knowledge-Intensive Language Tasks, by Jiafeng Guo and 5 other authors (arXiv:2402.16767)." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,ahmedNeuroSymbolicLearning2023,\cite{ahmedNeuroSymbolicLearning2023},Semantic Strengthening of Neuro-Symbolic Learning,http://arxiv.org/abs/2302.14207v1,"Numerous neuro-symbolic approaches have recently been proposed typically with the goal of adding symbolic knowledge
to the output layer of a neural network. Ideally, such losses maximize the probability that the neural network's predictions satisfy the underlying domain. Unfortunately, this type of probabilistic inference is often computationally infeasible. Neuro-symbolic approaches therefore commonly resort to fuzzy approximations of this probabilistic objective, sacrificing sound probabilistic semantics, or to sampling which is very seldom feasible. We approach the problem by first assuming the constraint decomposes conditioned on the features learned by the network. We iteratively strengthen our approximation, restoring the dependence between the constraints most responsible for degrading the quality of the approximation. This corresponds to computing the mutual information between pairs of constraints conditioned on the network's learned features, and may be construed as a measure of how well aligned the gradients of two distributions are. We show how to compute this efficiently for tractable circuits. We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles, observing that it improves upon the baselines while sidestepping intractability.",True,True,"Ahmed, Kareem and Chang, Kai-Wei and Van den Broeck, Guy",,25--27 Apr,https://proceedings.mlr.press/v206/ahmed23a.html,,,Semantic Strengthening of Neuro-Symbolic Learning,[PDF] Semantic Strengthening of Neuro-Symbolic Learning,https://proceedings.mlr.press/v206/ahmed23a/ahmed23a.pdf,"Neuro-symbolic learning aims to add symbolic knowledge to neural networks, using a probabilistic approach to scale inference while retaining sound semantics." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,mustafaStrcutredOutputPrediction2021,\cite{mustafaStrcutredOutputPrediction2021},Fine-grained Generalization Analysis of Structured Output Prediction,http://arxiv.org/abs/2106.00115v1,"In machine learning we often encounter structured output prediction problems (SOPPs), i.e. problems where the output space admits a rich internal structure. Application domains where SOPPs naturally occur include natural language processing, speech recognition, and computer vision. Typical SOPPs have an extremely large label set, which grows exponentially as a function of the size of the output. Existing generalization analysis implies generalization bounds with at least a square-root dependency on the cardinality $d$ of the label set, which can be vacuous in practice. In this paper, we significantly improve the state of the art by developing novel high-probability bounds with a logarithmic dependency on $d$. Moreover, we leverage the lens of algorithmic stability to develop generalization bounds in expectation without any dependency on $d$. Our results therefore build a solid theoretical foundation for learning in large-scale SOPPs. Furthermore, we extend our results to learning with weakly dependent data.",True,True,"Mustafa, Waleed and Lei, Yunwen and Ledent, Antoine and Kloft, Marius",,,https://doi.org/10.24963/ijcai.2021/391,10.24963/ijcai.2021/391,,Fine-grained Generalization Analysis of Structured Output Prediction,[PDF] Fine-grained Generalization Analysis of Structured Output Prediction,https://www.ijcai.org/proceedings/2021/0391.pdf,We consider two popular methods for structured output prediction: stochastic gradient descent (SGD) and regularized risk minimization (RRM).
We adapt the Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,nishinoGeneralizationAnalysisLearning2022a,\cite{nishinoGeneralizationAnalysisLearning2022a},Generalization Analysis on Learning with a Concurrent Verifier,http://arxiv.org/abs/2210.05331v1,"Machine learning technologies have been used in a wide range of practical systems. In practical situations, it is natural to expect the input-output pairs of a machine learning model to satisfy some requirements. However, it is difficult to obtain a model that satisfies requirements by just learning from examples. A simple solution is to add a module that checks whether the input-output pairs meet the requirements and then modifies the model's outputs. Such a module, which we call a {\em concurrent verifier} (CV), can give a certification, although how the generalizability of the machine learning model changes using a CV is unclear. This paper gives a generalization analysis of learning with a CV. We analyze how the learnability of a machine learning model changes with a CV and show a condition where we can obtain a guaranteed hypothesis using a verifier only in the inference time. We also show that typical error bounds based on Rademacher complexity will be no larger than that of the original model when using a CV in multi-class classification and structured prediction settings.",True,True,"Nishino, Masaaki and Nakamura, Kengo and Yasuda, Norihito",,,,,,Generalization Analysis on Learning with a Concurrent Verifier,Generalization Analysis on Learning with a Concurrent Verifier,http://arxiv.org/pdf/2210.05331v1,"Machine learning technologies have been used in a wide range of practical systems. In practical situations, it is natural to expect the input-output pairs of a machine learning model to satisfy some requirements. However, it is difficult to obtain a model that satisfies requirements by just learning from examples. A simple solution is to add a module that checks whether the input-output pairs meet the requirements and then modifies the model's outputs. Such a module, which we call a {\em concurrent verifier} (CV), can give a certification, although how the generalizability of the machine learning model changes using a CV is unclear. This paper gives a generalization analysis of learning with a CV. We analyze how the learnability of a machine learning model changes with a CV and show a condition where we can obtain a guaranteed hypothesis using a verifier only in the inference time. We also show that typical error bounds based on Rademacher complexity will be no larger than that of the original model when using a CV in multi-class classification and structured prediction settings." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,nishinoUnderstandingCV2025,\cite{nishinoUnderstandingCV2025},Understanding the impact of introducing constraints at inference time on generalization error,,,True,False,"Nishino, Masaaki and Nakamura, Kengo and Yasuda, Norihito",,,,,,Understanding the impact of introducing constraints at inference time on generalization error,[PDF] Understanding the Impact of Introducing Constraints at Inference ...,https://raw.githubusercontent.com/mlresearch/v235/main/assets/nishino24a/nishino24a.pdf,This paper analyses how the generalization error bounds change when we only put constraints in the inference time. 
Our main finding is that a class of loss Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,zhangSurveyControllableText2023,\cite{zhangSurveyControllableText2023},"A Survey of Controllable Text Generation using Transformer-based Pre-trained Language Models",http://arxiv.org/abs/2201.05337v5,"Controllable Text Generation (CTG) is emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that better meet the specific constraints in practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used transformer-based PLMs, have become a new paradigm of NLG, allowing generation of more diverse and fluent text. However, due to the limited level of interpretability of deep neural networks, the controllability of these methods need to be guaranteed. To this end, controllable text generation using transformer-based PLMs has become a rapidly growing yet challenging new research hotspot. A diverse range of approaches have emerged in the recent 3-4 years, targeting different CTG tasks that require different types of controlled constraints. In this paper, we present a systematic critical review on the common tasks, main approaches, and evaluation methods in this area. Finally, we discuss the challenges that the field is facing, and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize the state-of-the-art CTG techniques from the perspective of Transformer-based PLMs. We hope it can help researchers and practitioners in the related fields to quickly track the academic and technological frontier, providing them with a landscape of the area and a roadmap for future research.",True,True,"Zhang, Hanqing and Song, Haolin and Li, Shaoyu and Zhou, Ming and Song, Dawei",,,https://doi.org/10.1145/3617680,10.1145/3617680,ACM Comput. Surv.,"A Survey of Controllable Text Generation using Transformer-based Pre-trained Language Models",A Survey of Controllable Text Generation Using Transformer-based ...,https://dl.acm.org/doi/10.1145/3617680,"This article is closely related to two key aspects: controllable text generation and pre-trained language models, which will be briefly introduced in this" Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,mireshghallahControllableTextGeneration2022,\cite{mireshghallahControllableTextGeneration2022},"Mix and Match: Learning-free Controllable Text Generation using Energy Language Models",http://arxiv.org/abs/2203.13299v2,"Recent work on controlled text generation has either required attribute-based fine-tuning of the base language model (LM), or has restricted the parameterization of the attribute discriminator to be compatible with the base autoregressive LM. In this work, we propose Mix and Match LM, a global score-based alternative for controllable text generation that combines arbitrary pre-trained black-box models for achieving the desired attributes in the generated text without involving any fine-tuning or structural assumptions about the black-box models. We interpret the task of controllable generation as drawing samples from an energy-based model whose energy values are a linear combination of scores from black-box models that are separately responsible for fluency, the control attribute, and faithfulness to any conditioning context. 
We use a Metropolis-Hastings sampling scheme to sample from this energy-based model using bidirectional context and global attribute features. We validate the effectiveness of our approach on various controlled generation and style-based text revision tasks by outperforming recently proposed methods that involve extra training, fine-tuning, or restrictive assumptions over the form of models.",True,True,"Mireshghallah, Fatemehsadat and Goyal, Kartik and Berg-Kirkpatrick, Taylor",,,https://aclanthology.org/2022.acl-long.31/,10.18653/v1/2022.acl-long.31,,"Mix and Match: Learning-free Controllable Text Generation using Energy Language Models",Mix and Match: Learning-free Controllable Text Generation ...,https://cseweb.ucsd.edu/~fmireshg/acl2022_mix_match.pdf,by F Mireshghallah · Cited by 86 — We interpret the task of controllable generation as drawing samples from an energy-based model whose energy values are a linear combination of scores from black Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,mudgalControlledDecoding2025,\cite{mudgalControlledDecoding2025},Controlled Decoding from Language Models,http://arxiv.org/abs/2310.17022v3,"KL-regularized reinforcement learning (RL) is a popular alignment framework to control the language model responses towards high reward outcomes. We pose a tokenwise RL objective and propose a modular solver for it, called controlled decoding (CD). CD exerts control through a separate prefix scorer module, which is trained to learn a value function for the reward. The prefix scorer is used at inference time to control the generation from a frozen base model, provably sampling from a solution to the RL objective. We empirically demonstrate that CD is effective as a control mechanism on popular benchmarks. We also show that prefix scorers for multiple rewards may be combined at inference time, effectively solving a multi-objective RL problem with no additional training. We show that the benefits of applying CD transfer to an unseen base model with no further tuning as well. Finally, we show that CD can be applied in a blockwise decoding fashion at inference-time, essentially bridging the gap between the popular best-of-K strategy and tokenwise control through reinforcement learning. This makes CD a promising approach for alignment of language models.",True,True,"Mudgal, Sidharth and Lee, Jong and Ganapathy, Harish and Li, YaGuang and Wang, Tao and Huang, Yanping and Chen, Zhifeng and Cheng, Heng-Tze and Collins, Michael and Strohman, Trevor and Chen, Jilin and Beutel, Alex and Beirami, Ahmad",,,,,,Controlled Decoding from Language Models,Controlled Decoding from Language Models,http://arxiv.org/pdf/2310.17022v3,"KL-regularized reinforcement learning (RL) is a popular alignment framework to control the language model responses towards high reward outcomes. We pose a tokenwise RL objective and propose a modular solver for it, called controlled decoding (CD). CD exerts control through a separate prefix scorer module, which is trained to learn a value function for the reward. The prefix scorer is used at inference time to control the generation from a frozen base model, provably sampling from a solution to the RL objective. We empirically demonstrate that CD is effective as a control mechanism on popular benchmarks. We also show that prefix scorers for multiple rewards may be combined at inference time, effectively solving a multi-objective RL problem with no additional training. 
We show that the benefits of applying CD transfer to an unseen base model with no further tuning as well. Finally, we show that CD can be applied in a blockwise decoding fashion at inference-time, essentially bridging the gap between the popular best-of-K strategy and tokenwise control through reinforcement learning. This makes CD a promising approach for alignment of language models." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,kimCriticGuidedDecoding2023,\cite{kimCriticGuidedDecoding2023},Critic-Guided Decoding for Controlled Text Generation,http://arxiv.org/abs/2212.10938v1,"Steering language generation towards objectives or away from undesired content has been a long-standing goal in utilizing language models (LM). Recent work has demonstrated reinforcement learning and weighted decoding as effective approaches to achieve a higher level of language control and quality with pros and cons. In this work, we propose a novel critic decoding method for controlled language generation (CriticControl) that combines the strengths of reinforcement learning and weighted decoding. Specifically, we adopt the actor-critic framework to train an LM-steering critic from non-differentiable reward models. And similar to weighted decoding, our method freezes the language model and manipulates the output token distribution using called critic, improving training efficiency and stability. Evaluation of our method on three controlled generation tasks, namely topic control, sentiment control, and detoxification, shows that our approach generates more coherent and well-controlled texts than previous methods. In addition, CriticControl demonstrates superior generalization ability in zero-shot settings. Human evaluation studies also corroborate our findings.",True,True,"Kim, Minbeom and Lee, Hwanhee and Yoo, Kang Min and Park, Joonsuk and Lee, Hwaran and Jung, Kyomin",,,https://aclanthology.org/2023.findings-acl.281/,10.18653/v1/2023.findings-acl.281,,Critic-Guided Decoding for Controlled Text Generation,[2212.10938] Critic-Guided Decoding for Controlled Text Generation,https://arxiv.org/abs/2212.10938,"In this work, we propose a novel critic decoding method for controlled language generation (CriticControl) that combines the strengths of reinforcement learning and weighted decoding." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,chakrabortyPrincipledDecodingLLM2024,\cite{chakrabortyPrincipledDecodingLLM2024},Transfer Q Star: Principled Decoding for LLM Alignment,http://arxiv.org/abs/2405.20495v1,"Aligning foundation models is essential for their safe and trustworthy deployment. However, traditional fine-tuning methods are computationally intensive and require updating billions of model parameters.
A promising alternative, alignment via decoding, adjusts the response distribution directly without model updates to maximize a target reward $r$, thus providing a lightweight and adaptable framework for alignment. However, principled decoding methods rely on oracle access to an optimal Q-function ($Q^*$), which is often unavailable in practice. Hence, prior SoTA methods either approximate this $Q^*$ using $Q^{\pi_{\texttt{sft}}}$ (derived from the reference $\texttt{SFT}$ model) or rely on short-term rewards, resulting in sub-optimal decoding performance. In this work, we propose Transfer $Q^*$, which implicitly estimates the optimal value function for a target reward $r$ through a baseline model $\rho_{\texttt{BL}}$ aligned with a baseline reward $r_{\texttt{BL}}$ (which can be different from the target reward $r$). Theoretical analyses of Transfer $Q^*$ provide a rigorous characterization of its optimality, deriving an upper bound on the sub-optimality gap and identifying a hyperparameter to control the deviation from the pre-trained reference $\texttt{SFT}$ model based on user needs. Our approach significantly reduces the sub-optimality gap observed in prior SoTA methods and demonstrates superior empirical performance across key metrics such as coherence, diversity, and quality in extensive tests on several synthetic and real datasets.",True,True,"Chakraborty, Souradip and Ghosal, Soumya Suvra and Yin, Ming and Manocha, Dinesh and Wang, Mengdi and Bedi, Amrit Singh and Huang, Furong",,,,,arXiv preprint arXiv:2405.20495,Transfer Q Star: Principled Decoding for LLM Alignment,Transfer Q Star: Principled Decoding for LLM Alignment,http://arxiv.org/pdf/2405.20495v1,"Aligning foundation models is essential for their safe and trustworthy deployment. However, traditional fine-tuning methods are computationally intensive and require updating billions of model parameters. A promising alternative, alignment via decoding, adjusts the response distribution directly without model updates to maximize a target reward $r$, thus providing a lightweight and adaptable framework for alignment. However, principled decoding methods rely on oracle access to an optimal Q-function ($Q^*$), which is often unavailable in practice. Hence, prior SoTA methods either approximate this $Q^*$ using $Q^{\pi_{\texttt{sft}}}$ (derived from the reference $\texttt{SFT}$ model) or rely on short-term rewards, resulting in sub-optimal decoding performance. In this work, we propose Transfer $Q^*$, which implicitly estimates the optimal value function for a target reward $r$ through a baseline model $\rho_{\texttt{BL}}$ aligned with a baseline reward $r_{\texttt{BL}}$ (which can be different from the target reward $r$). Theoretical analyses of Transfer $Q^*$ provide a rigorous characterization of its optimality, deriving an upper bound on the sub-optimality gap and identifying a hyperparameter to control the deviation from the pre-trained reference $\texttt{SFT}$ model based on user needs. Our approach significantly reduces the sub-optimality gap observed in prior SoTA methods and demonstrates superior empirical performance across key metrics such as coherence, diversity, and quality in extensive tests on several synthetic and real datasets."
Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,kimGuaranteedGenerationLarge2024,\cite{kimGuaranteedGenerationLarge2024},Guaranteed Generation from Large Language Models,http://arxiv.org/abs/2410.06716v2,"As large language models (LLMs) are increasingly used across various applications, there is a growing need to control text generation to satisfy specific constraints or requirements. This raises a crucial question: Is it possible to guarantee strict constraint satisfaction in generated outputs while preserving the distribution of the original model as much as possible? We first define the ideal distribution - the one closest to the original model, which also always satisfies the expressed constraint - as the ultimate goal of guaranteed generation. We then state a fundamental limitation, namely that it is impossible to reach that goal through autoregressive training alone. This motivates the necessity of combining training-time and inference-time methods to enforce such guarantees. Based on this insight, we propose GUARD, a simple yet effective approach that combines an autoregressive proposal distribution with rejection sampling. Through GUARD's theoretical properties, we show how controlling the KL divergence between a specific proposal and the target ideal distribution simultaneously optimizes inference speed and distributional closeness. To validate these theoretical concepts, we conduct extensive experiments on two text generation settings with hard-to-satisfy constraints: a lexical constraint scenario and a sentiment reversal scenario. These experiments show that GUARD achieves perfect constraint satisfaction while almost preserving the ideal distribution with highly improved inference efficiency. GUARD provides a principled approach to enforcing strict guarantees for LLMs without compromising their generative capabilities.",True,True,Minbeom Kim and Thibaut Thonet and Jos Rozen and Hwaran Lee and Kyomin Jung and Marc Dymetman,,,https://arxiv.org/abs/2410.06716,,,Guaranteed Generation from Large Language Models,Guaranteed Generation from Large Language Models,http://arxiv.org/pdf/2410.06716v2,"As large language models (LLMs) are increasingly used across various applications, there is a growing need to control text generation to satisfy specific constraints or requirements. This raises a crucial question: Is it possible to guarantee strict constraint satisfaction in generated outputs while preserving the distribution of the original model as much as possible? We first define the ideal distribution - the one closest to the original model, which also always satisfies the expressed constraint - as the ultimate goal of guaranteed generation. We then state a fundamental limitation, namely that it is impossible to reach that goal through autoregressive training alone. This motivates the necessity of combining training-time and inference-time methods to enforce such guarantees. Based on this insight, we propose GUARD, a simple yet effective approach that combines an autoregressive proposal distribution with rejection sampling. Through GUARD's theoretical properties, we show how controlling the KL divergence between a specific proposal and the target ideal distribution simultaneously optimizes inference speed and distributional closeness. To validate these theoretical concepts, we conduct extensive experiments on two text generation settings with hard-to-satisfy constraints: a lexical constraint scenario and a sentiment reversal scenario. 
These experiments show that GUARD achieves perfect constraint satisfaction while almost preserving the ideal distribution with highly improved inference efficiency. GUARD provides a principled approach to enforcing strict guarantees for LLMs without compromising their generative capabilities." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,honghuaLogicalControl2024,\cite{honghuaLogicalControl2024},Adaptable Logical Control for Large Language Models,http://arxiv.org/abs/2406.13892v2,"Despite the success of Large Language Models (LLMs) on various tasks following human instructions, controlling model generation at inference time poses a persistent challenge. In this paper, we introduce Ctrl-G, an adaptable framework that facilitates tractable and flexible control of LLM generation to reliably follow logical constraints. Ctrl-G combines any production-ready LLM with a Hidden Markov Model, enabling LLM outputs to adhere to logical constraints represented as deterministic finite automata. We show that Ctrl-G, when applied to a TULU2-7B model, outperforms GPT3.5 and GPT4 on the task of interactive text editing: specifically, for the task of generating text insertions/continuations following logical constraints, Ctrl-G achieves over 30% higher satisfaction rate in human evaluation compared to GPT4. When applied to medium-size language models (e.g., GPT2-large), Ctrl-G also beats its counterparts for constrained generation by large margins on standard benchmarks. Additionally, as a proof-of-concept study, we experiment Ctrl-G on the Grade School Math benchmark to assist LLM reasoning, foreshadowing the application of Ctrl-G, as well as other constrained generation approaches, beyond traditional language generation tasks.",True,True,Honghua Zhang and Po-Nien Kung and Masahiro Yoshida and Guy Van den Broeck and Nanyun Peng,,,https://openreview.net/forum?id=58X9v92zRd,,,Adaptable Logical Control for Large Language Models,Adaptable Logical Control for Large Language Models,http://arxiv.org/pdf/2406.13892v2,"Despite the success of Large Language Models (LLMs) on various tasks following human instructions, controlling model generation at inference time poses a persistent challenge. In this paper, we introduce Ctrl-G, an adaptable framework that facilitates tractable and flexible control of LLM generation to reliably follow logical constraints. Ctrl-G combines any production-ready LLM with a Hidden Markov Model, enabling LLM outputs to adhere to logical constraints represented as deterministic finite automata. We show that Ctrl-G, when applied to a TULU2-7B model, outperforms GPT3.5 and GPT4 on the task of interactive text editing: specifically, for the task of generating text insertions/continuations following logical constraints, Ctrl-G achieves over 30% higher satisfaction rate in human evaluation compared to GPT4. When applied to medium-size language models (e.g., GPT2-large), Ctrl-G also beats its counterparts for constrained generation by large margins on standard benchmarks. Additionally, as a proof-of-concept study, we experiment Ctrl-G on the Grade School Math benchmark to assist LLM reasoning, foreshadowing the application of Ctrl-G, as well as other constrained generation approaches, beyond traditional language generation tasks." 
Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,zhangTractableControlAutoregressive2023,\cite{zhangTractableControlAutoregressive2023},Tractable Control for Autoregressive Language Generation,http://arxiv.org/abs/2304.07438v4,"Despite the success of autoregressive large language models in text generation, it remains a major challenge to generate text that satisfies complex constraints: sampling from the conditional distribution ${\Pr}(\text{text} | \alpha)$ is intractable for even the simplest lexical constraints $\alpha$. To overcome this challenge, we propose to use tractable probabilistic models (TPMs) to impose lexical constraints in autoregressive text generation models, which we refer to as GeLaTo (Generating Language with Tractable Constraints). To demonstrate the effectiveness of this framework, we use distilled hidden Markov models, where we can efficiently compute ${\Pr}(\text{text} | \alpha)$, to guide autoregressive generation from GPT2. GeLaTo achieves state-of-the-art performance on challenging benchmarks for constrained text generation (e.g., CommonGen), beating various strong baselines by a large margin. Our work not only opens up new avenues for controlling large language models but also motivates the development of more expressive TPMs.",True,True,"Zhang, Honghua and Dang, Meihua and Peng, Nanyun and Van Den Broeck, Guy",,,,,,Tractable Control for Autoregressive Language Generation,Tractable Control for Autoregressive Language Generation,http://arxiv.org/pdf/2304.07438v4,"Despite the success of autoregressive large language models in text generation, it remains a major challenge to generate text that satisfies complex constraints: sampling from the conditional distribution ${\Pr}(\text{text} | \alpha)$ is intractable for even the simplest lexical constraints $\alpha$. To overcome this challenge, we propose to use tractable probabilistic models (TPMs) to impose lexical constraints in autoregressive text generation models, which we refer to as GeLaTo (Generating Language with Tractable Constraints). To demonstrate the effectiveness of this framework, we use distilled hidden Markov models, where we can efficiently compute ${\Pr}(\text{text} | \alpha)$, to guide autoregressive generation from GPT2. GeLaTo achieves state-of-the-art performance on challenging benchmarks for constrained text generation (e.g., CommonGen), beating various strong baselines by a large margin. Our work not only opens up new avenues for controlling large language models but also motivates the development of more expressive TPMs." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,liTreeIndexDenseRetrieval2023,\cite{liTreeIndexDenseRetrieval2023},"Constructing Tree-based Index for Efficient and Effective Dense Retrieval",http://arxiv.org/abs/2304.11943v1,"Recent studies have shown that Dense Retrieval (DR) techniques can significantly improve the performance of first-stage retrieval in IR systems. Despite its empirical effectiveness, the application of DR is still limited. In contrast to statistic retrieval models that rely on highly efficient inverted index solutions, DR models build dense embeddings that are difficult to be pre-processed with most existing search indexing systems. To avoid the expensive cost of brute-force search, the Approximate Nearest Neighbor (ANN) algorithm and corresponding indexes are widely applied to speed up the inference process of DR models. 
Unfortunately, while ANN can improve the efficiency of DR models, it usually comes with a significant price on retrieval performance. To solve this issue, we propose JTR, which stands for Joint optimization of TRee-based index and query encoding. Specifically, we design a new unified contrastive learning loss to train tree-based index and query encoder in an end-to-end manner. The tree-based negative sampling strategy is applied to make the tree have the maximum heap property, which supports the effectiveness of beam search well. Moreover, we treat the cluster assignment as an optimization problem to update the tree-based index that allows overlapped clustering. We evaluate JTR on numerous popular retrieval benchmarks. Experimental results show that JTR achieves better retrieval performance while retaining high system efficiency compared with widely-adopted baselines. It provides a potential solution to balance efficiency and effectiveness in neural retrieval system designs.",True,True,"Li, Haitao and Ai, Qingyao and Zhan, Jingtao and Mao, Jiaxin and Liu, Yiqun and Liu, Zheng and Cao, Zhao",,,https://doi.org/10.1145/3539618.3591651,10.1145/3539618.3591651,,"Constructing Tree-based Index for Efficient and Effective Dense Retrieval",Constructing Tree-based Index for Efficient and Effective ...,https://arxiv.org/abs/2304.11943,"by H Li · 2023 · Cited by 29 — The tree-based negative sampling strategy is applied to make the tree have the maximum heap property, which supports the effectiveness of beam ..." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,zhuTreeRecsys2018,\cite{zhuTreeRecsys2018},Learning Tree-based Deep Model for Recommender Systems,http://arxiv.org/abs/1801.02294v5,"Model-based methods for recommender systems have been studied extensively in recent years. In systems with large corpus, however, the calculation cost for the learnt model to predict all user-item preferences is tremendous, which makes full corpus retrieval extremely difficult. To overcome the calculation barriers, models such as matrix factorization resort to inner product form (i.e., model user-item preference as the inner product of user, item latent factors) and indexes to facilitate efficient approximate k-nearest neighbor searches. However, it still remains challenging to incorporate more expressive interaction forms between user and item features, e.g., interactions through deep neural networks, because of the calculation cost. In this paper, we focus on the problem of introducing arbitrary advanced models to recommender systems with large corpus. We propose a novel tree-based method which can provide logarithmic complexity w.r.t. corpus size even with more expressive models such as deep neural networks. Our main idea is to predict user interests from coarse to fine by traversing tree nodes in a top-down fashion and making decisions for each user-node pair. We also show that the tree structure can be jointly learnt towards better compatibility with users' interest distribution and hence facilitate both training and prediction. Experimental evaluations with two large-scale real-world datasets show that the proposed method significantly outperforms traditional methods.
Online A/B test results in Taobao display advertising platform also demonstrate the effectiveness of the proposed method in production environments.",True,True,"Zhu, Han and Li, Xiang and Zhang, Pengye and Li, Guozheng and He, Jie and Li, Han and Gai, Kun",,,https://doi.org/10.1145/3219819.3219826,10.1145/3219819.3219826,,Learning Tree-based Deep Model for Recommender Systems,[PDF] Learning Tree-based Deep Model for Recommender Systems - arXiv,https://arxiv.org/pdf/1801.02294,"In this paper, we focus on the problem of introducing arbitrary advanced models to recommender systems with large corpus. We propose a novel tree-based method" Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,zhuoOptimalTreeModels2020,\cite{zhuoOptimalTreeModels2020},Learning Optimal Tree Models Under Beam Search,http://arxiv.org/abs/2006.15408v1,"Retrieving relevant targets from an extremely large target set under computational limits is a common challenge for information retrieval and recommendation systems. Tree models, which formulate targets as leaves of a tree with trainable node-wise scorers, have attracted a lot of interests in tackling this challenge due to their logarithmic computational complexity in both training and testing. Tree-based deep models (TDMs) and probabilistic label trees (PLTs) are two representative kinds of them. Though achieving many practical successes, existing tree models suffer from the training-testing discrepancy, where the retrieval performance deterioration caused by beam search in testing is not considered in training. This leads to an intrinsic gap between the most relevant targets and those retrieved by beam search with even the optimally trained node-wise scorers. We take a first step towards understanding and analyzing this problem theoretically, and develop the concept of Bayes optimality under beam search and calibration under beam search as general analyzing tools for this purpose. Moreover, to eliminate the discrepancy, we propose a novel algorithm for learning optimal tree models under beam search. Experiments on both synthetic and real data verify the rationality of our theoretical analysis and demonstrate the superiority of our algorithm compared to state-of-the-art methods.",True,True,"Zhuo, Jingwei and Xu, Ziru and Dai, Wei and Zhu, Han and Li, Han and Xu, Jian and Gai, Kun",,,,,,Learning Optimal Tree Models Under Beam Search,Learning Optimal Tree Models Under Beam Search,http://arxiv.org/pdf/2006.15408v1,"Retrieving relevant targets from an extremely large target set under computational limits is a common challenge for information retrieval and recommendation systems. Tree models, which formulate targets as leaves of a tree with trainable node-wise scorers, have attracted a lot of interests in tackling this challenge due to their logarithmic computational complexity in both training and testing. Tree-based deep models (TDMs) and probabilistic label trees (PLTs) are two representative kinds of them. Though achieving many practical successes, existing tree models suffer from the training-testing discrepancy, where the retrieval performance deterioration caused by beam search in testing is not considered in training. This leads to an intrinsic gap between the most relevant targets and those retrieved by beam search with even the optimally trained node-wise scorers. 
We take a first step towards understanding and analyzing this problem theoretically, and develop the concept of Bayes optimality under beam search and calibration under beam search as general analyzing tools for this purpose. Moreover, to eliminate the discrepancy, we propose a novel algorithm for learning optimal tree models under beam search. Experiments on both synthetic and real data verify the rationality of our theoretical analysis and demonstrate the superiority of our algorithm compared to state-of-the-art methods." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,zhuJointTreeIndexRecsys2019,\cite{zhuJointTreeIndexRecsys2019},"Joint Optimization of Tree-based Index and Deep Model for Recommender Systems",http://arxiv.org/abs/1902.07565v2,"Large-scale industrial recommender systems are usually confronted with computational problems due to the enormous corpus size. To retrieve and recommend the most relevant items to users under response time limits, resorting to an efficient index structure is an effective and practical solution. The previous work Tree-based Deep Model (TDM) \cite{zhu2018learning} greatly improves recommendation accuracy using tree index. By indexing items in a tree hierarchy and training a user-node preference prediction model satisfying a max-heap like property in the tree, TDM provides logarithmic computational complexity w.r.t. the corpus size, enabling the use of arbitrary advanced models in candidate retrieval and recommendation. In tree-based recommendation methods, the quality of both the tree index and the user-node preference prediction model determines the recommendation accuracy for the most part. We argue that the learning of tree index and preference model has interdependence. Our purpose, in this paper, is to develop a method to jointly learn the index structure and user preference prediction model. In our proposed joint optimization framework, the learning of index and user preference prediction model are carried out under a unified performance measure. Besides, we come up with a novel hierarchical user preference representation utilizing the tree index hierarchy. Experimental evaluations with two large-scale real-world datasets show that the proposed method improves recommendation accuracy significantly. Online A/B test results at a display advertising platform also demonstrate the effectiveness of the proposed method in production environments.",True,True,"Zhu, Han and Chang, Daqing and Xu, Ziru and Zhang, Pengye and Li, Xiang and He, Jie and Li, Han and Xu, Jian and Gai, Kun",,,,,,"Joint Optimization of Tree-based Index and Deep Model for Recommender Systems",[PDF] Joint Optimization of Tree-based Index and Deep Model for ...,http://papers.neurips.cc/paper/8652-joint-optimization-of-tree-based-index-and-deep-model-for-recommender-systems.pdf,"In tree-based recommendation methods, the quality of both the tree index and the user-node preference prediction model determines the recommendation accuracy." Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,zengPlanningAheadGenerative2024,\cite{zengPlanningAheadGenerative2024},"Planning Ahead in Generative Retrieval: Guiding Autoregressive Generation through Simultaneous Decoding",http://arxiv.org/abs/2404.14600v1,"This paper introduces PAG-a novel optimization and decoding approach that guides autoregressive generation of document identifiers in generative retrieval models through simultaneous decoding. 
To this aim, PAG constructs a set-based and sequential identifier for each document. Motivated by the bag-of-words assumption in information retrieval, the set-based identifier is built on lexical tokens. The sequential identifier, on the other hand, is obtained via quantizing relevance-based representations of documents. Extensive experiments on MSMARCO and TREC Deep Learning Track data reveal that PAG outperforms the state-of-the-art generative retrieval model by a large margin (e.g., 15.6% MRR improvements on MS MARCO), while achieving 22x speed up in terms of query latency.",True,True,Hansi Zeng and Chen Luo and Hamed Zamani,,,https://doi.org/10.1145/3626772.3657746,10.1145/3626772.3657746,,"Planning Ahead in Generative Retrieval: Guiding Autoregressive Generation through Simultaneous Decoding",[2404.14600] Planning Ahead in Generative Retrieval,https://arxiv.org/abs/2404.14600,by H Zeng · 2024 · Cited by 21 — This paper introduces PAG-a novel optimization and decoding approach that guides autoregressive generation of document identifiers in generative retrieval Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,liCorpusLM2024,\cite{liCorpusLM2024},"CorpusLM: Towards a Unified Language Model on Corpus for Knowledge-Intensive Tasks",http://arxiv.org/abs/2402.01176v2,"Large language models (LLMs) have gained significant attention in various fields but prone to hallucination, especially in knowledge-intensive (KI) tasks. To address this, retrieval-augmented generation (RAG) has emerged as a popular solution to enhance factual accuracy. However, traditional retrieval modules often rely on large document index and disconnect with generative tasks. With the advent of generative retrieval (GR), language models can retrieve by directly generating document identifiers (DocIDs), offering superior performance in retrieval tasks. However, the potential relationship between GR and downstream tasks remains unexplored. In this paper, we propose \textbf{CorpusLM}, a unified language model that leverages external corpus to tackle various knowledge-intensive tasks by integrating generative retrieval, closed-book generation, and RAG through a unified greedy decoding process. We design the following mechanisms to facilitate effective retrieval and generation, and improve the end-to-end effectiveness of KI tasks: (1) We develop a ranking-oriented DocID list generation strategy, which refines GR by directly learning from a DocID ranking list, to improve retrieval quality. (2) We design a continuous DocIDs-References-Answer generation strategy, which facilitates effective and efficient RAG. (3) We employ well-designed unsupervised DocID understanding tasks, to comprehend DocID semantics and their relevance to downstream tasks. We evaluate our approach on the widely used KILT benchmark with two variants of backbone models, i.e., T5 and Llama2. Experimental results demonstrate the superior performance of our models in both retrieval and downstream tasks.",True,True,Xiaoxi Li and Zhicheng Dou and Yujia Zhou and Fangchao Liu,,,https://doi.org/10.1145/3626772.3657778,10.1145/3626772.3657778,,"CorpusLM: Towards a Unified Language Model on Corpus for Knowledge-Intensive Tasks",CorpusLM: Towards a Unified Language Model on Corpus ...,https://dl.acm.org/doi/10.1145/3626772.3657778,"In this paper, we propose CorpusLM, a unified language model that leverages external corpus to tackle various knowledge-intensive tasks." 
Constrained Auto-Regressive Decoding Constrains Generative Retrieval,2504.09935v1,liUnigen2024,\cite{liUnigen2024},"UniGen: A Unified Generative Framework for Retrieval and Question Answering with Large Language Models",http://arxiv.org/abs/2312.11036v1,"Generative information retrieval, encompassing two major tasks of Generative Document Retrieval (GDR) and Grounded Answer Generation (GAR), has gained significant attention in the area of information retrieval and natural language processing. Existing methods for GDR and GAR rely on separate retrieval and reader modules, which hinder simultaneous optimization. To overcome this, we present \textbf{UniGen}, a \textbf{Uni}fied \textbf{Gen}erative framework for retrieval and question answering that integrates both tasks into a single generative model leveraging the capabilities of large language models. UniGen employs a shared encoder and two distinct decoders for generative retrieval and question answering. To facilitate the learning of both tasks, we introduce connectors, generated by large language models, to bridge the gaps between query inputs and generation targets, as well as between document identifiers and answers. Furthermore, we propose an iterative enhancement strategy that leverages generated answers and retrieved documents to iteratively improve both tasks. Through extensive experiments on the MS MARCO and NQ datasets, we demonstrate the effectiveness of UniGen, showcasing its superior performance in both the retrieval and the question answering tasks.",True,True,Xiaoxi Li and Yujia Zhou and Zhicheng Dou,,,https://doi.org/10.1609/aaai.v38i8.28714,10.1609/AAAI.V38I8.28714,,"UniGen: A Unified Generative Framework for Retrieval and Question Answering with Large Language Models",UniGen: A Unified Generative Framework for Retrieval and Question ...,https://underline.io/lecture/93708-unigen-a-unified-generative-framework-for-retrieval-and-question-answering-with-large-language-models,UniGen: A Unified Generative Framework for Retrieval and Question Answering with Large Language Models