Dataset columns (name: type):
- parent_paper_title: string (63 distinct values)
- parent_paper_arxiv_id: string (63 distinct values)
- citation_shorthand: string (length 2–56)
- raw_citation_text: string (length 9–63)
- cited_paper_title: string (length 5–161)
- cited_paper_arxiv_link: string (length 32–37, nullable)
- cited_paper_abstract: string (length 406–1.92k, nullable)
- has_metadata: bool (1 class)
- is_arxiv_paper: bool (2 classes)
- bib_paper_authors: string (length 2–2.44k, nullable)
- bib_paper_year: float64 (range 1.97k–2.03k, nullable)
- bib_paper_month: string (16 distinct values)
- bib_paper_url: string (length 20–116, nullable)
- bib_paper_doi: string (269 distinct values)
- bib_paper_journal: string (length 3–148, nullable)
- original_title: string (length 5–161)
- search_res_title: string (length 4–122)
- search_res_url: string (length 22–267)
- search_res_content: string (length 19–1.92k)

The records below list these fields in order, delimited by | markers.
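As a quick illustration of how records with this schema can be consumed, here is a minimal sketch that loads the dataset with the Hugging Face `datasets` library and keeps only citations whose cited paper is on arXiv. The dataset path "user/citation-search-results" is a hypothetical placeholder for the real repository name, and the snippet assumes the `datasets` package is installed.

```python
# Minimal sketch: load the citation records and filter to arXiv-hosted cited papers.
# "user/citation-search-results" is a hypothetical placeholder, not the real dataset path.
from datasets import load_dataset

ds = load_dataset("user/citation-search-results", split="train")

# Keep rows whose cited paper is on arXiv and has a non-null abstract.
arxiv_rows = ds.filter(
    lambda row: row["is_arxiv_paper"] and row["cited_paper_abstract"] is not None
)

# Print a few records: parent paper id, citation key, cited title, and arXiv link.
for row in arxiv_rows.select(range(min(3, len(arxiv_rows)))):
    print(row["parent_paper_arxiv_id"], row["citation_shorthand"])
    print("  cited:", row["cited_paper_title"])
    print("  link: ", row["cited_paper_arxiv_link"])
```

The same pattern (one `filter` predicate per column of interest) extends to the other fields, for example selecting rows with a non-null bib_paper_doi.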
TaxAgent: How Large Language Model Designs Fiscal Policy
|
2506.02838v1
|
NBERw21340
|
\cite{NBERw21340}
|
Effective Policy for Reducing Inequality? The Earned Income Tax Credit and the Distribution of Income
| null | null | true | false |
Hoynes, Hilary W and Patel, Ankur J
| 2015 |
July
|
http://www.nber.org/papers/w21340
|
10.3386/w21340
| null |
Effective Policy for Reducing Inequality? The Earned Income Tax Credit and the Distribution of Income
|
Effective Policy for Reducing Inequality? The Earned Income
|
https://ideas.repec.org/p/nbr/nberwo/21340.html
|
Our results show that a policy-induced $1000 increase in the EITC leads to a 7.3 percentage point increase in employment and a 9.4 percentage point reduction
|
TaxAgent: How Large Language Model Designs Fiscal Policy
|
2506.02838v1
|
NBERw21211
|
\cite{NBERw21211}
|
The Earned Income Tax Credit (EITC)
| null | null | true | false |
Nichols, Austin and Rothstein, Jesse
| 2015 |
May
|
http://www.nber.org/papers/w21211
|
10.3386/w21211
| null |
The Earned Income Tax Credit (EITC)
|
What is the earned income tax credit? - Tax Policy Center
|
https://taxpolicycenter.org/briefing-book/what-earned-income-tax-credit
|
The earned income tax credit (EITC) provides substantial support to low- and moderate-income working parents who claim a qualifying child.
|
TaxAgent: How Large Language Model Designs Fiscal Policy
|
2506.02838v1
|
Foo2019ProcessAC
|
\cite{Foo2019ProcessAC}
|
Process and Critical Approaches to Solving the Systemic Climate Change Governance Problem
| null | null | true | false |
Check Woo Foo
| 2019 | null |
https://api.semanticscholar.org/CorpusID:235319207
| null |
Politics \& Energy eJournal
|
Process and Critical Approaches to Solving the Systemic Climate Change Governance Problem
|
Process and Critical Approaches to Solving the Systemic Climate ...
|
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3608501
|
The most important and urgent task, besides avoiding nuclear war, is abatement of the existential threat of systemic climate change,
|
TaxAgent: How Large Language Model Designs Fiscal Policy
|
2506.02838v1
|
Patjoshi2015DesignAD
|
\cite{Patjoshi2015DesignAD}
|
Design and Development of Advanced Control strategies for Power Quality Enhancement at Distribution Level
| null | null | true | false |
Rajesh Kumar Patjoshi
| 2015 | null |
https://api.semanticscholar.org/CorpusID:112918597
| null | null |
Design and Development of Advanced Control strategies for Power Quality Enhancement at Distribution Level
|
(PDF) Advanced Control Strategies for UPQC to Improve ...
|
https://www.researchgate.net/publication/279289697_Advanced_Control_Strategies_for_UPQC_to_Improve_Power_Quality_of_Power_Distribution_Systems
|
PDF | On Jul 2, 2014, Quoc Nam Trinh published Advanced Control Strategies for UPQC to Improve Power Quality of Power Distribution Systems
|
TaxAgent: How Large Language Model Designs Fiscal Policy
|
2506.02838v1
|
10.1257/jep.25.4.165
|
\cite{10.1257/jep.25.4.165}
|
The Case for a Progressive Tax: From Basic Research to Policy Recommendations
| null | null | true | false |
Diamond, Peter and Saez, Emmanuel
| 2011 |
December
|
https://www.aeaweb.org/articles?id=10.1257/jep.25.4.165
|
10.1257/jep.25.4.165
|
Journal of Economic Perspectives
|
The Case for a Progressive Tax: From Basic Research to Policy Recommendations
|
The Case for a Progressive Tax
|
https://economics.mit.edu/sites/default/files/2022-09/jep.25.4.165.pdf
|
The Case for a Progressive Tax: From Basic Research to Policy Recommendations (Peter Diamond and Emmanuel Saez, Journal of Economic Perspectives, doi=10.1257/jep.25.4.165; Peter Diamond is Professor Emeritus of Economics, Massachusetts Institute of Technology, Cambridge, Massachusetts). Therefore, optimal income tax theory is first a normative theory that shows how a social welfare objective combines with constraints arising from limits on resources and behavioral responses to taxation in order to derive specific tax policy recommendations. In addition, optimal income tax theory can be used to
|
TaxAgent: How Large Language Model Designs Fiscal Policy
|
2506.02838v1
|
10.2307/2296779
|
\cite{10.2307/2296779}
|
An Exploration in the Theory of Optimum Income Taxation
| null | null | true | false |
Mirrlees, J. A.
| 1971 |
04
|
https://doi.org/10.2307/2296779
|
10.2307/2296779
|
The Review of Economic Studies
|
An Exploration in the Theory of Optimum Income Taxation
|
Exploration in the Theory of Optimum Income Taxation
|
https://academic.oup.com/restud/article-abstract/38/2/175/1527903
|
by JA Mirrlees · 1971 · Cited by 7415 — J. A. Mirrlees; An Exploration in the Theory of Optimum Income Taxation, The Review of Economic Studies, Volume 38, Issue 2, 1 April 1971, Pages 175–208,
|
TaxAgent: How Large Language Model Designs Fiscal Policy
|
2506.02838v1
|
RePEc:aea:aecrev:v:61:y:1971:i:1:p:8-27
|
\cite{RePEc:aea:aecrev:v:61:y:1971:i:1:p:8-27}
|
Optimal Taxation and Public Production: I--Production Efficiency
| null | null | true | false |
Diamond, Peter and Mirrlees, James
| 1971 | null |
https://EconPapers.repec.org/RePEc:aea:aecrev:v:61:y:1971:i:1:p:8-27
| null |
American Economic Review
|
Optimal Taxation and Public Production: I--Production Efficiency
|
[PDF] Optimal Taxation and Public Production I: Production Efficiency
|
http://hassler-j.iies.su.se/Courses/DynPubFin/Papers/DiamondMirrlees.pdf
|
Theories of optimal production in a planned economy have usually assumed that the tax system can allow the government to achieve any desired redistribution of
|
TaxAgent: How Large Language Model Designs Fiscal Policy
|
2506.02838v1
|
10.1111/1467-937X.00166
|
\cite{10.1111/1467-937X.00166}
|
Using Elasticities to Derive Optimal Income Tax Rates
| null | null | true | false |
Saez, Emmanuel
| 2001 |
01
|
https://doi.org/10.1111/1467-937X.00166
|
10.1111/1467-937X.00166
|
The Review of Economic Studies
|
Using Elasticities to Derive Optimal Income Tax Rates
|
Using Elasticities to Derive Optimal Income Tax Rates
|
https://academic.oup.com/restud/article/68/1/205/1568609
|
by E Saez · 2001 · Cited by 1885 — This paper derives optimal income tax formulas using compensated and uncompensated elasticities of earnings with respect to tax rates.
|
TaxAgent: How Large Language Model Designs Fiscal Policy
|
2506.02838v1
|
10.1257/pol.6.1.230
|
\cite{10.1257/pol.6.1.230}
|
Optimal Taxation of Top Labor Incomes: A Tale of Three Elasticities
| null | null | true | false |
Piketty, Thomas and Saez, Emmanuel and Stantcheva, Stefanie
| 2014 |
February
|
https://www.aeaweb.org/articles?id=10.1257/pol.6.1.230
|
10.1257/pol.6.1.230
|
American Economic Journal: Economic Policy
|
Optimal Taxation of Top Labor Incomes: A Tale of Three Elasticities
|
Optimal Taxation of Top Labor Incomes: A Tale of Three Elasticities
|
https://www.nber.org/papers/w17616
|
This paper presents a model of optimal labor income taxation where top incomes respond to marginal tax rates through three channels.
|
TaxAgent: How Large Language Model Designs Fiscal Policy
|
2506.02838v1
|
10.1257/pol.20180033
|
\cite{10.1257/pol.20180033}
|
Optimal Income Taxation with Unemployment and Wage Responses: A Sufficient Statistics Approach
| null | null | true | false |
Kroft, Kory and Kucko, Kavan and Lehmann, Etienne and Schmieder, Johannes
| 2020 |
February
|
https://www.aeaweb.org/articles?id=10.1257/pol.20180033
|
10.1257/pol.20180033
|
American Economic Journal: Economic Policy
|
Optimal Income Taxation with Unemployment and Wage Responses: A Sufficient Statistics Approach
|
Optimal Income Taxation with Unemployment and Wage Responses
|
https://www.aeaweb.org/articles?id=10.1257/pol.20180033
|
We derive a sufficient statistics tax formula in a model that incorporates unemployment and endogenous wages to study the shape of the optimal income tax. Key
|
TaxAgent: How Large Language Model Designs Fiscal Policy
|
2506.02838v1
|
zheng2020aieconomistimprovingequality
|
\cite{zheng2020aieconomistimprovingequality}
|
The AI Economist: Improving Equality and Productivity with AI-Driven Tax
Policies
|
http://arxiv.org/abs/2004.13332v1
|
Tackling real-world socio-economic challenges requires designing and testing
economic policies. However, this is hard in practice, due to a lack of
appropriate (micro-level) economic data and limited opportunity to experiment.
In this work, we train social planners that discover tax policies in dynamic
economies that can effectively trade-off economic equality and productivity. We
propose a two-level deep reinforcement learning approach to learn dynamic tax
policies, based on economic simulations in which both agents and a government
learn and adapt. Our data-driven approach does not make use of economic
modeling assumptions, and learns from observational data alone. We make four
main contributions. First, we present an economic simulation environment that
features competitive pressures and market dynamics. We validate the simulation
by showing that baseline tax systems perform in a way that is consistent with
economic theory, including in regard to learned agent behaviors and
specializations. Second, we show that AI-driven tax policies improve the
trade-off between equality and productivity by 16% over baseline policies,
including the prominent Saez tax framework. Third, we showcase several emergent
features: AI-driven tax policies are qualitatively different from baselines,
setting a higher top tax rate and higher net subsidies for low incomes.
Moreover, AI-driven tax policies perform strongly in the face of emergent
tax-gaming strategies learned by AI agents. Lastly, AI-driven tax policies are
also effective when used in experiments with human participants. In experiments
conducted on MTurk, an AI tax policy provides an equality-productivity
trade-off that is similar to that provided by the Saez framework along with
higher inverse-income weighted social welfare.
| true | true |
Stephan Zheng and Alexander Trott and Sunil Srinivasa and Nikhil Naik and Melvin Gruesbeck and David C. Parkes and Richard Socher
| 2020 | null |
https://arxiv.org/abs/2004.13332
| null | null |
The AI Economist: Improving Equality and Productivity with AI-Driven Tax
Policies
|
[PDF] Improving Equality and Productivity with AI-Driven Tax Policies - arXiv
|
http://arxiv.org/pdf/2004.13332
|
The AI Economist uses AI to discover tax policies that improve the trade-off between equality and productivity, achieving a 16% improvement
|
TaxAgent: How Large Language Model Designs Fiscal Policy
|
2506.02838v1
|
NBERc14009
|
\cite{NBERc14009}
|
The Impact of Machine Learning on Economics
| null | null | true | false |
Susan Athey
| 2018 |
January
|
http://www.nber.org/chapters/c14009
| null | null |
The Impact of Machine Learning on Economics
|
The Impact of Machine Learning on Economics
|
https://www.gsb.stanford.edu/faculty-research/publications/impact-machine-learning-economics
|
The Impact of Machine Learning on Economics. This paper provides an assessment of the early contributions of machine learning to economics, as well as predictions about its future contributions. It begins by briefly overviewing some themes from the literature on machine learning, and then draws some contrasts with traditional approaches to estimating the impact of counterfactual policies in economics. Next, we review some of the initial “off-the-shelf” applications of machine learning to economics, including applications in analyzing text and images. Finally, we overview a set of broader predictions about the future impact of machine learning on economics, including its impacts on the nature of collaboration, funding, research tools, and research questions.
|
TaxAgent: How Large Language Model Designs Fiscal Policy
|
2506.02838v1
|
AxtellFarmer2022
|
\cite{AxtellFarmer2022}
|
Agent Based Modeling in Economics and Finance: Past, Present, and Future
| null | null | true | false |
Axtell, R. and Farmer, J.
| 2022 | null | null | null |
Journal of Economic Literature
|
Agent Based Modeling in Economics and Finance: Past, Present, and Future
|
[PDF] Agent-Based Modeling in Economics and Finance: Past, Present ...
|
https://complexityhandbook.uni-hohenheim.de/fileadmin/einrichtungen/complexityhandbook/AXTELL_Robert.pdf
|
Abstract. Agent-based modeling is a novel computational methodology for representing the behavior of individuals in order to study social phenomena.
|
TaxAgent: How Large Language Model Designs Fiscal Policy
|
2506.02838v1
|
DelliGatti2018
|
\cite{DelliGatti2018}
|
Contents
| null | null | true | false |
Delli Gatti, Domenico and Fagiolo, Giorgio and Gallegati, Mauro and Richiardi, Matteo and Russo, Alberto
| 2018 | null | null | null | null |
Contents
|
CONTENTS | definition in the Cambridge English Dictionary
|
https://dictionary.cambridge.org/us/dictionary/english/contents
|
everything that is contained within something: contents of The contents of his bag spilled all over the floor. He didn't need to open the box because
|
TaxAgent: How Large Language Model Designs Fiscal Policy
|
2506.02838v1
|
shen2025phyxdoesmodelwits
|
\cite{shen2025phyxdoesmodelwits}
|
PhyX: Does Your Model Have the "Wits" for Physical Reasoning?
|
http://arxiv.org/abs/2505.15929v2
|
Existing benchmarks fail to capture a crucial aspect of intelligence:
physical reasoning, the integrated ability to combine domain knowledge,
symbolic reasoning, and understanding of real-world constraints. To address
this gap, we introduce PhyX: the first large-scale benchmark designed to assess
models capacity for physics-grounded reasoning in visual scenarios. PhyX
includes 3K meticulously curated multimodal questions spanning 6 reasoning
types across 25 sub-domains and 6 core physics domains: thermodynamics,
electromagnetism, mechanics, modern physics, optics, and wave\&acoustics. In
our comprehensive evaluation, even state-of-the-art models struggle
significantly with physical reasoning. GPT-4o, Claude3.7-Sonnet, and
GPT-o4-mini achieve only 32.5%, 42.2%, and 45.8% accuracy
respectively-performance gaps exceeding 29% compared to human experts. Our
analysis exposes critical limitations in current models: over-reliance on
memorized disciplinary knowledge, excessive dependence on mathematical
formulations, and surface-level visual pattern matching rather than genuine
physical understanding. We provide in-depth analysis through fine-grained
statistics, detailed case studies, and multiple evaluation paradigms to
thoroughly examine physical reasoning capabilities. To ensure reproducibility,
we implement a compatible evaluation protocol based on widely-used toolkits
such as VLMEvalKit, enabling one-click evaluation. More details are available
on our project page: https://phyx-bench.github.io/.
| true | true |
Hui Shen and Taiqiang Wu and Qi Han and Yunta Hsieh and Jizhou Wang and Yuyue Zhang and Yuxin Cheng and Zijian Hao and Yuansheng Ni and Xin Wang and Zhongwei Wan and Kai Zhang and Wendong Xu and Jing Xiong and Ping Luo and Wenhu Chen and Chaofan Tao and Zhuoqing Mao and Ngai Wong
| 2025 | null |
https://arxiv.org/abs/2505.15929
| null | null |
PhyX: Does Your Model Have the "Wits" for Physical Reasoning?
|
PhyX: Does Your Model Have the "Wits" for Physical Reasoning?
|
http://arxiv.org/pdf/2505.15929v2
|
Existing benchmarks fail to capture a crucial aspect of intelligence:
physical reasoning, the integrated ability to combine domain knowledge,
symbolic reasoning, and understanding of real-world constraints. To address
this gap, we introduce PhyX: the first large-scale benchmark designed to assess
models capacity for physics-grounded reasoning in visual scenarios. PhyX
includes 3K meticulously curated multimodal questions spanning 6 reasoning
types across 25 sub-domains and 6 core physics domains: thermodynamics,
electromagnetism, mechanics, modern physics, optics, and wave\&acoustics. In
our comprehensive evaluation, even state-of-the-art models struggle
significantly with physical reasoning. GPT-4o, Claude3.7-Sonnet, and
GPT-o4-mini achieve only 32.5%, 42.2%, and 45.8% accuracy
respectively-performance gaps exceeding 29% compared to human experts. Our
analysis exposes critical limitations in current models: over-reliance on
memorized disciplinary knowledge, excessive dependence on mathematical
formulations, and surface-level visual pattern matching rather than genuine
physical understanding. We provide in-depth analysis through fine-grained
statistics, detailed case studies, and multiple evaluation paradigms to
thoroughly examine physical reasoning capabilities. To ensure reproducibility,
we implement a compatible evaluation protocol based on widely-used toolkits
such as VLMEvalKit, enabling one-click evaluation. More details are available
on our project page: https://phyx-bench.github.io/.
|
TaxAgent: How Large Language Model Designs Fiscal Policy
|
2506.02838v1
|
zhao2024competeaiunderstandingcompetitiondynamics
|
\cite{zhao2024competeaiunderstandingcompetitiondynamics}
|
CompeteAI: Understanding the Competition Dynamics in Large Language
Model-based Agents
|
http://arxiv.org/abs/2310.17512v2
|
Large language models (LLMs) have been widely used as agents to complete
different tasks, such as personal assistance or event planning. While most of
the work has focused on cooperation and collaboration between agents, little
work explores competition, another important mechanism that promotes the
development of society and economy. In this paper, we seek to examine the
competition dynamics in LLM-based agents. We first propose a general framework
for studying the competition between agents. Then, we implement a practical
competitive environment using GPT-4 to simulate a virtual town with two types
of agents, restaurant agents and customer agents. Specifically, the restaurant
agents compete with each other to attract more customers, where competition
encourages them to transform, such as cultivating new operating strategies.
Simulation experiments reveal several interesting findings at the micro and
macro levels, which align well with existing market and sociological theories.
We hope that the framework and environment can be a promising testbed to study
competition that fosters understanding of society. Code is available at:
https://github.com/microsoft/competeai.
| true | true |
Qinlin Zhao and Jindong Wang and Yixuan Zhang and Yiqiao Jin and Kaijie Zhu and Hao Chen and Xing Xie
| 2024 | null |
https://arxiv.org/abs/2310.17512
| null | null |
CompeteAI: Understanding the Competition Dynamics in Large Language
Model-based Agents
|
CompeteAI: Understanding the Competition Dynamics in Large ...
|
https://arxiv.org/abs/2310.17512
|
In this paper, we seek to examine the competition dynamics in LLM-based agents. We first propose a general framework for studying the competition between
|
TaxAgent: How Large Language Model Designs Fiscal Policy
|
2506.02838v1
|
nie2024surveylargelanguagemodels
|
\cite{nie2024surveylargelanguagemodels}
|
A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges
| null | null | true | false |
Yuqi Nie and Yaxuan Kong and Xiaowen Dong and John M. Mulvey and H. Vincent Poor and Qingsong Wen and Stefan Zohren
| 2024 | null |
https://arxiv.org/abs/2406.11903
| null | null |
A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges
|
A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges
|
http://arxiv.org/pdf/2406.11903v1
|
Recent advances in large language models (LLMs) have unlocked novel
opportunities for machine learning applications in the financial domain. These
models have demonstrated remarkable capabilities in understanding context,
processing vast amounts of data, and generating human-preferred contents. In
this survey, we explore the application of LLMs on various financial tasks,
focusing on their potential to transform traditional practices and drive
innovation. We provide a discussion of the progress and advantages of LLMs in
financial contexts, analyzing their advanced technologies as well as
prospective capabilities in contextual understanding, transfer learning
flexibility, complex emotion detection, etc. We then highlight this survey for
categorizing the existing literature into key application areas, including
linguistic tasks, sentiment analysis, financial time series, financial
reasoning, agent-based modeling, and other applications. For each application
area, we delve into specific methodologies, such as textual analysis,
knowledge-based analysis, forecasting, data augmentation, planning, decision
support, and simulations. Furthermore, a comprehensive collection of datasets,
model assets, and useful codes associated with mainstream applications are
presented as resources for the researchers and practitioners. Finally, we
outline the challenges and opportunities for future research, particularly
emphasizing a number of distinctive aspects in this field. We hope our work can
help facilitate the adoption and further development of LLMs in the financial
sector.
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
vllm
|
\cite{vllm}
|
Efficient Memory Management for Large Language Model Serving with
PagedAttention
|
http://arxiv.org/abs/2309.06180v1
|
High throughput serving of large language models (LLMs) requires batching
sufficiently many requests at a time. However, existing systems struggle
because the key-value cache (KV cache) memory for each request is huge and
grows and shrinks dynamically. When managed inefficiently, this memory can be
significantly wasted by fragmentation and redundant duplication, limiting the
batch size. To address this problem, we propose PagedAttention, an attention
algorithm inspired by the classical virtual memory and paging techniques in
operating systems. On top of it, we build vLLM, an LLM serving system that
achieves (1) near-zero waste in KV cache memory and (2) flexible sharing of KV
cache within and across requests to further reduce memory usage. Our
evaluations show that vLLM improves the throughput of popular LLMs by
2-4$\times$ with the same level of latency compared to the state-of-the-art
systems, such as FasterTransformer and Orca. The improvement is more pronounced
with longer sequences, larger models, and more complex decoding algorithms.
vLLM's source code is publicly available at
https://github.com/vllm-project/vllm
| true | true |
Woosuk Kwon and
Zhuohan Li and
Siyuan Zhuang and
Ying Sheng and
Lianmin Zheng and
Cody Hao Yu and
Joseph Gonzalez and
Hao Zhang and
Ion Stoica
| 2023 | null |
https://doi.org/10.1145/3600006.3613165
|
10.1145/3600006.3613165
| null |
Efficient Memory Management for Large Language Model Serving with
PagedAttention
|
Efficient Memory Management for Large Language Model ...
|
https://arxiv.org/pdf/2309.06180
|
Efficient Memory Management for Large Language Model Serving with PagedAttention. Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, Ion Stoica (UC Berkeley, Stanford University, Independent Researcher, UC San Diego). Abstract: High throughput serving of large language models (LLMs) requires batching sufficiently many requests at a time. To address this problem, we propose PagedAttention, an attention algorithm inspired by the classical virtual memory and paging techniques in operating systems, i.e., the operating system's (OS) solution to memory fragmentation and sharing. On top of it, we build vLLM, a high-throughput distributed LLM serving engine that achieves (1) near-zero waste in KV cache memory and (2) flexible sharing of KV cache within and across requests to further reduce memory usage.
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
chunkattention
|
\cite{chunkattention}
|
ChunkAttention: Efficient Self-Attention with Prefix-Aware KV Cache and
Two-Phase Partition
|
http://arxiv.org/abs/2402.15220v4
|
Self-attention is an essential component of large language models (LLM) but a
significant source of inference latency for long sequences. In multi-tenant LLM
serving scenarios, the compute and memory operation cost of self-attention can
be optimized by using the probability that multiple LLM requests have shared
system prompts in prefixes. In this paper, we introduce ChunkAttention, a
prefix-aware self-attention module that can detect matching prompt prefixes
across multiple requests and share their key/value tensors in memory at runtime
to improve the memory utilization of KV cache. This is achieved by breaking
monolithic key/value tensors into smaller chunks and structuring them into the
auxiliary prefix tree. Consequently, on top of the prefix-tree based KV cache,
we design an efficient self-attention kernel, where a two-phase partition
algorithm is implemented to improve the data locality during self-attention
computation in the presence of shared system prompts. Experiments show that
ChunkAttention can speed up the self-attention kernel by 3.2-4.8$\times$
compared to the state-of-the-art implementation, with the length of the system
prompt ranging from 1024 to 4096.
| true | true |
Lu Ye and Ze Tao and Yong Huang and Yang Li
| 2024 | null |
https://aclanthology.org/2024.acl-long.623
| null | null |
ChunkAttention: Efficient Self-Attention with Prefix-Aware KV Cache and
Two-Phase Partition
|
[PDF] Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase ...
|
https://aclanthology.org/2024.acl-long.623.pdf
|
ChunkAttention is a prefix-aware self-attention module that uses a prefix-aware KV cache and two-phase partition to improve memory utilization
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
cachedattention
|
\cite{cachedattention}
|
Cost-Efficient Large Language Model Serving for Multi-turn Conversations
with CachedAttention
| null | null | true | false |
Bin Gao and
Zhuomin He and
Puru Sharma and
Qingxuan Kang and
Djordje Jevdjic and
Junbo Deng and
Xingkun Yang and
Zhou Yu and
Pengfei Zuo
| 2024 | null |
https://www.usenix.org/conference/atc24/presentation/gao-bin-cost
| null | null |
Cost-Efficient Large Language Model Serving for Multi-turn Conversations
with CachedAttention
|
Cost-Efficient Large Language Model Serving for Multi-turn ... - arXiv
|
https://arxiv.org/abs/2403.19708
|
This paper proposes CachedAttention, a new attention mechanism that enables reuse of KV caches across multi-turn conversations, significantly reducing the
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
promptcache
|
\cite{promptcache}
|
Prompt Cache: Modular Attention Reuse for Low-Latency Inference
|
http://arxiv.org/abs/2311.04934v2
|
We present Prompt Cache, an approach for accelerating inference for large
language models (LLM) by reusing attention states across different LLM prompts.
Many input prompts have overlapping text segments, such as system messages,
prompt templates, and documents provided for context. Our key insight is that
by precomputing and storing the attention states of these frequently occurring
text segments on the inference server, we can efficiently reuse them when these
segments appear in user prompts. Prompt Cache employs a schema to explicitly
define such reusable text segments, called prompt modules. The schema ensures
positional accuracy during attention state reuse and provides users with an
interface to access cached states in their prompt. Using a prototype
implementation, we evaluate Prompt Cache across several LLMs. We show that
Prompt Cache significantly reduce latency in time-to-first-token, especially
for longer prompts such as document-based question answering and
recommendations. The improvements range from 8x for GPU-based inference to 60x
for CPU-based inference, all while maintaining output accuracy and without the
need for model parameter modifications.
| true | true |
In Gim and
Guojun Chen and
Seung{-}Seob Lee and
Nikhil Sarda and
Anurag Khandelwal and
Lin Zhong
| 2024 | null |
https://proceedings.mlsys.org/paper_files/paper/2024/hash/a66caa1703fe34705a4368c3014c1966-Abstract-Conference.html
| null | null |
Prompt Cache: Modular Attention Reuse for Low-Latency Inference
|
[PDF] Prompt Cache: Modular Attention Reuse for Low-Latency Inference
|
https://proceedings.mlsys.org/paper_files/paper/2024/file/a66caa1703fe34705a4368c3014c1966-Paper-Conference.pdf
|
Prompt Cache accelerates LLM inference by reusing attention states of frequently occurring text segments, precomputed and stored in memory.
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
sglang
|
\cite{sglang}
|
Efficiently Programming Large Language Models using SGLang
| null | null | true | false |
Lianmin Zheng and
Liangsheng Yin and
Zhiqiang Xie and
Jeff Huang and
Chuyue Sun and
Cody Hao Yu and
Shiyi Cao and
Christos Kozyrakis and
Ion Stoica and
Joseph E. Gonzalez and
Clark W. Barrett and
Ying Sheng
| 2023 | null |
https://doi.org/10.48550/arXiv.2312.07104
|
10.48550/ARXIV.2312.07104
|
CoRR
|
Efficiently Programming Large Language Models using SGLang
|
Efficiently Programming Large Language Models using SGLang
|
https://arxiv.org/html/2312.07104v1
|
SGLang simplifies the writing of LLM programs and boosts execution efficiency. Our experiments demonstrate that SGLang can speed up common LLM tasks by up to 5
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
cacheblend
|
\cite{cacheblend}
|
CacheBlend: Fast Large Language Model Serving for RAG with Cached
Knowledge Fusion
|
http://arxiv.org/abs/2405.16444v3
|
Large language models (LLMs) often incorporate multiple text chunks in their
inputs to provide the necessary contexts. To speed up the prefill of the long
LLM inputs, one can pre-compute the KV cache of a text and re-use the KV cache
when the context is reused as the prefix of another LLM input. However, the
reused text chunks are not always the input prefix, which makes precomputed KV
caches not directly usable since they ignore the text's cross-attention with
the preceding texts. Thus, the benefits of reusing KV caches remain largely
unrealized.
This paper tackles just one challenge: when an LLM input contains multiple
text chunks, how to quickly combine their precomputed KV caches in order to
achieve the same generation quality as the expensive full prefill (i.e.,
without reusing KV cache)? This challenge naturally arises in
retrieval-augmented generation (RAG) where the input is supplemented with
multiple retrieved texts as the context. We present CacheBlend, a scheme that
reuses the precomputed KV caches, regardless prefix or not, and selectively
recomputes the KV values of a small subset of tokens to partially update each
reused KV cache. In the meantime, the small extra delay for recomputing some
tokens can be pipelined with the retrieval of KV caches within the same job,
allowing CacheBlend to store KV caches in slower devices with more storage
capacity while retrieving them without increasing the inference delay. By
comparing CacheBlend with the state-of-the-art KV cache reusing schemes on
three open-source LLMs of various sizes and four popular benchmark datasets of
different tasks, we show that CacheBlend reduces time-to-first-token (TTFT) by
2.2-3.3x and increases the inference throughput by 2.8-5x from full KV
recompute without compromising generation quality. The code is available at
https://github.com/LMCache/LMCache.
| true | true |
Jiayi Yao and
Hanchen Li and
Yuhan Liu and
Siddhant Ray and
Yihua Cheng and
Qizheng Zhang and
Kuntai Du and
Shan Lu and
Junchen Jiang
| 2024 | null |
https://doi.org/10.48550/arXiv.2405.16444
|
10.48550/ARXIV.2405.16444
|
CoRR
|
CacheBlend: Fast Large Language Model Serving for RAG with Cached
Knowledge Fusion
|
CacheBlend: Fast Large Language Model Serving for RAG ... - arXiv
|
https://arxiv.org/abs/2405.16444
|
arXiv:2405.16444 (cs): CacheBlend: Fast Large Language Model Serving for RAG with Cached Knowledge Fusion, by Jiayi Yao and 8 other authors.
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
openaiapi
|
\cite{openaiapi}
|
OpenAI developer platform
| null | null | true | false |
OpenAI
| null | null | null | null | null |
OpenAI developer platform
|
Introducing Verdi, an AI dev platform powered by GPT-4o - OpenAI
|
https://openai.com/index/mercado-libre/
|
Verdi, a development platform layer using GPT-4o, GPT-4o mini, and GPT-3.5 Turbo, which is transforming how Mercado Libre handles customer service and other
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
genimiapi
|
\cite{genimiapi}
|
Gemini API
| null | null | true | false |
Google
| 2025 | null | null | null | null |
Gemini API
|
Gemini Developer API | Gemma open models | Google AI for ...
|
https://ai.google.dev/
|
Gemini Developer API | Gemma open models | Google AI for Developers. Integrate Google AI models with an API key: build with cutting-edge AI models, like Gemini, Imagen, and Veo, from Google DeepMind. Unlock AI capabilities for your apps with a simple call to the Gemini API. Integrate AI models like Gemini Nano into web apps with Chrome's built-in web platform APIs. Build trusted and secure AI with guidance for responsible design, development, and deployment of models and applications. See how the Ruby-based AI agent framework empowers developer teams to be more productive with the power of Gemini models.
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
claudeapi
|
\cite{claudeapi}
|
Claude API
| null | null | true | false |
Anthropic
| 2025 | null | null | null | null |
Claude API
|
Anthropic API
|
https://docs.anthropic.com/en/home
|
Home - Anthropic Claude Documentation. Learn how to get started with the Anthropic API, the Console, and Claude Code, and explore the advanced features and capabilities now available in Claude. API reference: integrate and scale using our API and SDKs. Anthropic Console: learn about changes and new features in Claude and the API. Upgrade to Claude 4: upgrade to the latest model to access new tools and features available in Claude 4. Claude Code quickstart: get started with Claude Code. Claude Code reference: consult the Claude Code reference documentation for details on feature implementation and configuration. Claude Code release notes: learn about changes and new features in Claude Code. Anthropic Quickstarts: see replicable code samples and implementations.
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
mooncake
|
\cite{mooncake}
|
Mooncake Trace
| null | null | true | false | null | 2025 | null | null | null | null |
Mooncake Trace
|
kvcache-ai/Mooncake - GitHub
|
https://github.com/kvcache-ai/Mooncake
|
Moonshot AI. Now both the Transfer Engine and Mooncake Store are open-sourced! This repository also hosts its technical report and the open sourced traces.
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
hu2024epic
|
\cite{hu2024epic}
|
EPIC: Efficient Position-Independent Context Caching for Serving Large Language Models
| null | null | true | false |
Junhao Hu and Wenrui Huang and Haoyi Wang and Weidong Wang and Tiancheng Hu and Qin Zhang and Hao Feng and Xusheng Chen and Yizhou Shan and Tao Xie
| 2024 | null |
https://arxiv.org/abs/2410.15332
| null | null |
EPIC: Efficient Position-Independent Context Caching for Serving Large Language Models
|
EPIC: Efficient Position-Independent Caching for Serving Large...
|
https://openreview.net/forum?id=qjd3ZUiHRT&referrer=%5Bthe%20profile%20of%20Yizhou%20Shan%5D(%2Fprofile%3Fid%3D~Yizhou_Shan2)
|
Summary: This paper introduces PICI, an efficient position-independent context caching system for serving large language models. The system pre-computes the KV
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
streamingllm
|
\cite{streamingllm}
|
Efficient Streaming Language Models with Attention Sinks
|
http://arxiv.org/abs/2309.17453v4
|
Deploying Large Language Models (LLMs) in streaming applications such as
multi-round dialogue, where long interactions are expected, is urgently needed
but poses two major challenges. Firstly, during the decoding stage, caching
previous tokens' Key and Value states (KV) consumes extensive memory. Secondly,
popular LLMs cannot generalize to longer texts than the training sequence
length. Window attention, where only the most recent KVs are cached, is a
natural approach -- but we show that it fails when the text length surpasses
the cache size. We observe an interesting phenomenon, namely attention sink,
that keeping the KV of initial tokens will largely recover the performance of
window attention. In this paper, we first demonstrate that the emergence of
attention sink is due to the strong attention scores towards initial tokens as
a "sink" even if they are not semantically important. Based on the above
analysis, we introduce StreamingLLM, an efficient framework that enables LLMs
trained with a finite length attention window to generalize to infinite
sequence lengths without any fine-tuning. We show that StreamingLLM can enable
Llama-2, MPT, Falcon, and Pythia to perform stable and efficient language
modeling with up to 4 million tokens and more. In addition, we discover that
adding a placeholder token as a dedicated attention sink during pre-training
can further improve streaming deployment. In streaming settings, StreamingLLM
outperforms the sliding window recomputation baseline by up to 22.2x speedup.
Code and datasets are provided at https://github.com/mit-han-lab/streaming-llm.
| true | true |
Guangxuan Xiao and
Yuandong Tian and
Beidi Chen and
Song Han and
Mike Lewis
| 2024 | null |
https://openreview.net/forum?id=NG7sS51zVF
| null | null |
Efficient Streaming Language Models with Attention Sinks
|
Efficient Streaming Language Models with Attention Sinks
|
http://arxiv.org/pdf/2309.17453v4
|
Deploying Large Language Models (LLMs) in streaming applications such as
multi-round dialogue, where long interactions are expected, is urgently needed
but poses two major challenges. Firstly, during the decoding stage, caching
previous tokens' Key and Value states (KV) consumes extensive memory. Secondly,
popular LLMs cannot generalize to longer texts than the training sequence
length. Window attention, where only the most recent KVs are cached, is a
natural approach -- but we show that it fails when the text length surpasses
the cache size. We observe an interesting phenomenon, namely attention sink,
that keeping the KV of initial tokens will largely recover the performance of
window attention. In this paper, we first demonstrate that the emergence of
attention sink is due to the strong attention scores towards initial tokens as
a "sink" even if they are not semantically important. Based on the above
analysis, we introduce StreamingLLM, an efficient framework that enables LLMs
trained with a finite length attention window to generalize to infinite
sequence lengths without any fine-tuning. We show that StreamingLLM can enable
Llama-2, MPT, Falcon, and Pythia to perform stable and efficient language
modeling with up to 4 million tokens and more. In addition, we discover that
adding a placeholder token as a dedicated attention sink during pre-training
can further improve streaming deployment. In streaming settings, StreamingLLM
outperforms the sliding window recomputation baseline by up to 22.2x speedup.
Code and datasets are provided at https://github.com/mit-han-lab/streaming-llm.
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
h2o
|
\cite{h2o}
|
{H2O:} Heavy-Hitter Oracle for Efficient Generative Inference of Large
Language Models
| null | null | true | false |
Zhenyu Zhang and
Ying Sheng and
Tianyi Zhou and
Tianlong Chen and
Lianmin Zheng and
Ruisi Cai and
Zhao Song and
Yuandong Tian and
Christopher R{\'{e}} and
Clark W. Barrett and
Zhangyang Wang and
Beidi Chen
| 2023 | null |
http://papers.nips.cc/paper_files/paper/2023/hash/6ceefa7b15572587b78ecfcebb2827f8-Abstract-Conference.html
| null | null |
{H2O:} Heavy-Hitter Oracle for Efficient Generative Inference of Large
Language Models
|
Hogwild! Inference: Parallel LLM Generation via Concurrent Attention
|
https://arxiv.org/html/2504.06261v1
|
H2o: Heavy-hitter oracle for efficient generative inference of large language models. Advances in Neural Information Processing Systems, 36
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
infinigen
|
\cite{infinigen}
|
InfiniGen: Efficient Generative Inference of Large Language Models with
Dynamic KV Cache Management
|
http://arxiv.org/abs/2406.19707v1
|
Transformer-based large language models (LLMs) demonstrate impressive
performance across various natural language processing tasks. Serving LLM
inference for generating long contents, however, poses a challenge due to the
enormous memory footprint of the transient state, known as the key-value (KV)
cache, which scales with the sequence length and batch size. In this paper, we
present InfiniGen, a novel KV cache management framework tailored for long-text
generation, which synergistically works with modern offloading-based inference
systems. InfiniGen leverages the key insight that a few important tokens that
are essential for computing the subsequent attention layer in the Transformer
can be speculated by performing a minimal rehearsal with the inputs of the
current layer and part of the query weight and key cache of the subsequent
layer. This allows us to prefetch only the essential KV cache entries (without
fetching them all), thereby mitigating the fetch overhead from the host memory
in offloading-based LLM serving systems. Our evaluation on several
representative LLMs shows that InfiniGen improves the overall performance of a
modern offloading-based system by up to 3.00x compared to prior KV cache
management methods while offering substantially better model accuracy.
| true | true |
Wonbeom Lee and
Jungi Lee and
Junghwan Seo and
Jaewoong Sim
| 2024 | null |
https://www.usenix.org/conference/osdi24/presentation/lee
| null | null |
InfiniGen: Efficient Generative Inference of Large Language Models with
Dynamic KV Cache Management
|
InfiniGen: Efficient Generative Inference of Large Language Models ...
|
https://arxiv.org/abs/2406.19707
|
In this paper, we present InfiniGen, a novel KV cache management framework tailored for long-text generation, which synergistically works with modern
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
pyramidkv
|
\cite{pyramidkv}
|
PyramidKV: Dynamic KV Cache Compression based on Pyramidal Information
Funneling
|
http://arxiv.org/abs/2406.02069v4
|
In this study, we investigate whether attention-based information flow inside
large language models (LLMs) is aggregated through noticeable patterns for long
context processing. Our observations reveal that LLMs aggregate information
through Pyramidal Information Funneling where attention is scattering widely in
lower layers, progressively consolidating within specific contexts, and
ultimately focusing on critical tokens (a.k.a massive activation or attention
sink) in higher layers. Motivated by these insights, we developed PyramidKV, a
novel and effective KV cache compression method. This approach dynamically
adjusts the KV cache size across different layers, allocating more cache in
lower layers and less in higher ones, diverging from traditional methods that
maintain a uniform KV cache size. Our experimental evaluations, utilizing the
LongBench benchmark, show that PyramidKV matches the performance of models with
a full KV cache while retaining only 12% of the KV cache, thus significantly
reducing memory usage. In scenarios emphasizing memory efficiency, where only
0.7% of the KV cache is maintained, PyramidKV surpasses other KV cache
compression techniques, achieving up to a 20.5 absolute accuracy improvement on
TREC dataset. In the Needle-in-a-Haystack experiment, PyramidKV outperforms
competing methods in maintaining long-context comprehension in LLMs; notably,
retaining just 128 KV cache entries enables the LLAMA-3-70B model to achieve
100.0 Acc. performance.
| true | true |
Zefan Cai and
Yichi Zhang and
Bofei Gao and
Yuliang Liu and
Tianyu Liu and
Keming Lu and
Wayne Xiong and
Yue Dong and
Baobao Chang and
Junjie Hu and
Wen Xiao
| 2024 | null |
https://doi.org/10.48550/arXiv.2406.02069
|
10.48550/ARXIV.2406.02069
|
CoRR
|
PyramidKV: Dynamic KV Cache Compression based on Pyramidal Information
Funneling
|
PyramidKV: Dynamic KV Cache Compression based on Pyramidal...
|
https://openreview.net/forum?id=jZVNmDiU86
|
We developed PyramidKV, a novel and effective KV cache compression method. This approach dynamically adjusts the KV cache size across different layers.
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
KVQuant
|
\cite{KVQuant}
|
KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache
Quantization
|
http://arxiv.org/abs/2401.18079v6
|
LLMs are seeing growing use for applications which require large context
windows, and with these large context windows KV cache activations surface as
the dominant contributor to memory consumption during inference. Quantization
is a promising approach for compressing KV cache activations; however, existing
solutions fail to represent activations accurately in sub-4-bit precision. Our
work, KVQuant, facilitates low precision KV cache quantization by incorporating
several novel methods: (i) Per-Channel Key Quantization, where we adjust the
dimension along which we quantize the Key activations to better match the
distribution; (ii) Pre-RoPE Key Quantization, where we quantize Key activations
before the rotary positional embedding to mitigate its impact on quantization;
(iii) Non-Uniform KV Cache Quantization, where we derive per-layer
sensitivity-weighted non-uniform datatypes that better represent the
distributions; and (iv) Per-Vector Dense-and-Sparse Quantization, where we
isolate outliers separately for each vector to minimize skews in quantization
ranges. By applying our method to the LLaMA, Llama-2, Llama-3, and Mistral
models, we achieve < 0.1 perplexity degradation with 3-bit quantization on both
Wikitext-2 and C4, outperforming existing approaches. Our method enables
serving LLaMA-7B with a context length of up to 1 million on a single A100-80GB
GPU and up to 10 million on an 8-GPU system. We develop custom CUDA kernels for
KVQuant, showing that we can achieve up to ~1.7x speedups, compared to baseline
fp16 matrix-vector multiplications, for the LLaMA-7B model.
| true | true |
Coleman Hooper and
Sehoon Kim and
Hiva Mohammadzadeh and
Michael W. Mahoney and
Yakun Sophia Shao and
Kurt Keutzer and
Amir Gholami
| 2024 | null |
http://papers.nips.cc/paper_files/paper/2024/hash/028fcbcf85435d39a40c4d61b42c99a4-Abstract-Conference.html
| null | null |
KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache
Quantization
|
KVQuant: Towards 10 Million Context Length LLM Inference with KV ...
|
https://github.com/SqueezeAILab/KVQuant
|
GitHub - SqueezeAILab/KVQuant: [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization [Paper]. KVQuant is a methodology for efficient KV cache quantization that incorporates several innovations to achieve accurate low-precision quantization, thereby enabling efficient long context length inference. TLDR: KVQuant addresses the memory bottleneck with long context length inference by quantizing the KV cache to low precision.
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
lruk
|
\cite{lruk}
|
The {LRU-K} Page Replacement Algorithm For Database Disk Buffering
| null | null | true | false |
Elizabeth J. O'Neil and
Patrick E. O'Neil and
Gerhard Weikum
| 1993 | null |
https://doi.org/10.1145/170035.170081
|
10.1145/170035.170081
| null |
The {LRU-K} Page Replacement Algorithm For Database Disk Buffering
|
[PDF] The LRU-K Page Replacement Algorithm For Database Disk Buffering
|
https://www.cs.cmu.edu/~natassa/courses/15-721/papers/p297-o_neil.pdf
|
The basic idea of LRU-K is to keep track of the times of the last K references to popular database pages, using this information to statistically estimate
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
slru
|
\cite{slru}
|
Caching Strategies to Improve Disk System Performance
| null | null | true | false |
Ramakrishna Karedla and
J. Spencer Love and
Bradley G. Wherry
| 1994 | null |
https://doi.org/10.1109/2.268884
|
10.1109/2.268884
|
Computer
|
Caching Strategies to Improve Disk System Performance
|
Caching strategies to improve disk system performance - IEEE Xplore
|
http://ieeexplore.ieee.org/document/268884/
|
In this article, we examine the use of caching as a means to increase system response time and improve the data throughput of the disk subsystem.
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
twoq
|
\cite{twoq}
|
2Q: {A} Low Overhead High Performance Buffer Management Replacement
Algorithm
| null | null | true | false |
Theodore Johnson and
Dennis E. Shasha
| 1994 | null |
http://www.vldb.org/conf/1994/P439.PDF
| null | null |
2Q: {A} Low Overhead High Performance Buffer Management Replacement
Algorithm
|
2Q: A Low Overhead High Performance Buffer Management ...
|
https://dl.acm.org/doi/10.5555/645920.672996
|
2Q: A Low Overhead High Performance Buffer Management Replacement Algorithm. Authors: Theodore Johnson.
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
eelru
|
\cite{eelru}
|
{EELRU:} Simple and Effective Adaptive Page Replacement
| null | null | true | false |
Yannis Smaragdakis and
Scott F. Kaplan and
Paul R. Wilson
| 1999 | null |
https://doi.org/10.1145/301453.301486
|
10.1145/301453.301486
| null |
{EELRU:} Simple and Effective Adaptive Page Replacement
|
EELRU: Simple and Effective Adaptive Page Replacement
|
https://www.researchgate.net/publication/2822757_EELRU_Simple_and_Effective_Adaptive_Page_Replacement
|
EELRU is a simple adaptive replacement algorithm, which uses only the kind of information needed by LRU---how recently each page has been touched relative to
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
lrfu
|
\cite{lrfu}
|
{LRFU:} {A} Spectrum of Policies that Subsumes the Least Recently
Used and Least Frequently Used Policies
| null | null | true | false |
Donghee Lee and
Jongmoo Choi and
Jong{-}Hun Kim and
Sam H. Noh and
Sang Lyul Min and
Yookun Cho and
Chong{-}Sang Kim
| 2001 | null |
https://doi.org/10.1109/TC.2001.970573
|
10.1109/TC.2001.970573
|
{IEEE} Trans. Computers
|
{LRFU:} {A} Spectrum of Policies that Subsumes the Least Recently
Used and Least Frequently Used Policies
|
[PDF] LRFU: a spectrum of policies that subsumes the least recently used ...
|
https://www.openu.ac.il/home/wiseman/2os/lru/lrfu.pdf
|
Of these, the Least Recently Used (LRU) and the Least Frequently Used (LFU) block replacement policies constitute the two main streams. The LRU policy and its
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
lirs
|
\cite{lirs}
|
{LIRS:} an efficient low inter-reference recency set replacement policy
to improve buffer cache performance
| null | null | true | false |
Song Jiang and
Xiaodong Zhang
| 2002 | null |
https://doi.org/10.1145/511334.511340
|
10.1145/511334.511340
| null |
{LIRS:} an efficient low inter-reference recency set replacement policy
to improve buffer cache performance
|
LIRS: an efficient low inter-reference recency set replacement policy ...
|
https://www.researchgate.net/publication/367088056_LIRS_an_efficient_low_inter-reference_recency_set_replacement_policy_to_improve_buffer_cache_performance
|
Many studies are focused on cache replacement algorithms, such as FIFO, LRU, LFU, and some advanced cache algorithms like ARC [19], LIRS [15] and 2Q [16].
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
arc
|
\cite{arc}
|
{ARC:} {A} Self-Tuning, Low Overhead Replacement Cache
| null | null | true | false |
Nimrod Megiddo and
Dharmendra S. Modha
| 2003 | null |
http://www.usenix.org/events/fast03/tech/megiddo.html
| null | null |
{ARC:} {A} Self-Tuning, Low Overhead Replacement Cache
|
[PDF] ARC: A Self-Tuning, Low Overhead Replacement Cache
|
https://www.cs.cmu.edu/~natassa/courses/15-721/papers/arcfast.pdf
|
We propose a new cache management policy, namely, Adaptive Replacement Cache (ARC), that has several advantages. In response to evolving and changing access
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
mq
|
\cite{mq}
|
Second-Level Buffer Cache Management
| null | null | true | false |
Yuanyuan Zhou and
Zhifeng Chen and
Kai Li
| 2004 | null |
https://doi.org/10.1109/TPDS.2004.13
|
10.1109/TPDS.2004.13
|
{IEEE} Trans. Parallel Distributed Syst.
|
Second-Level Buffer Cache Management
|
[PDF] Second-Level Buffer Cache Management
|
https://www.openu.ac.il/home/wiseman/2os/lru/mq.pdf
|
This is a local cache replacement algorithm because it manages an L2 buffer cache without any information from first-level.
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
car
|
\cite{car}
|
{CAR:} Clock with Adaptive Replacement
| null | null | true | false |
Sorav Bansal and
Dharmendra S. Modha
| 2,004 | null |
http://www.usenix.org/events/fast04/tech/bansal.html
| null | null |
{CAR:} Clock with Adaptive Replacement
|
CAR: Clock with Adaptive Replacement - Stanford CS Theory
|
http://theory.stanford.edu/~sbansal/pubs/fast04.pdf
|
by S Bansal · Cited by 412 — CAR is a new algorithm that improves upon CLOCK by being scan-resistant, self-tuning, and adaptively capturing recency and frequency features.
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
clockpro
|
\cite{clockpro}
|
CLOCK-Pro: An Effective Improvement of the {CLOCK} Replacement
| null | null | true | false |
Song Jiang and
Feng Chen and
Xiaodong Zhang
| 2,005 | null |
http://www.usenix.org/events/usenix05/tech/general/jiang.html
| null | null |
CLOCK-Pro: An Effective Improvement of the {CLOCK} Replacement
|
CLOCK-Pro: An Effective Improvement of the CLOCK Replacement
|
https://www.usenix.org/conference/2005-usenix-annual-technical-conference/clock-pro-effective-improvement-clock-replacement
|
We propose an improved CLOCK replacement policy, called CLOCK-Pro. By additionally keeping track of a limited number of replaced pages, CLOCK-Pro works in a
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
DBLP:journals/tos/EinzigerEFM22
|
\cite{DBLP:journals/tos/EinzigerEFM22}
|
Lightweight Robust Size Aware Cache Management
|
http://arxiv.org/abs/2105.08770v2
|
Modern key-value stores, object stores, Internet proxy caches, as well as
Content Delivery Networks (CDN) often manage objects of diverse sizes, e.g.,
blobs, video files of different lengths, images with varying resolution, and
small documents. In such workloads, size-aware cache policies outperform
size-oblivious algorithms. Unfortunately, existing size-aware algorithms tend
to be overly complicated and computationally expensive.
Our work follows a more approachable pattern; we extend the prevalent
(size-oblivious) TinyLFU cache admission policy to handle variable sized items.
Implementing our approach inside two popular caching libraries only requires
minor changes. We show that our algorithms yield competitive or better
hit-ratios and byte hit-ratios compared to the state of the art size-aware
algorithms such as AdaptSize, LHD, LRB, and GDSF. Further, a runtime comparison
indicates that our implementation is faster by up to x3 compared to the best
alternative, i.e., it imposes much lower CPU overhead.
| true | true |
Gil Einziger and
Ohad Eytan and
Roy Friedman and
Benjamin Manes
| 2,022 | null |
https://doi.org/10.1145/3507920
|
10.1145/3507920
|
{ACM} Trans. Storage
|
Lightweight Robust Size Aware Cache Management
|
Lightweight Robust Size Aware Cache Management
|
http://arxiv.org/pdf/2105.08770v2
|
Modern key-value stores, object stores, Internet proxy caches, as well as
Content Delivery Networks (CDN) often manage objects of diverse sizes, e.g.,
blobs, video files of different lengths, images with varying resolution, and
small documents. In such workloads, size-aware cache policies outperform
size-oblivious algorithms. Unfortunately, existing size-aware algorithms tend
to be overly complicated and computationally expensive.
Our work follows a more approachable pattern; we extend the prevalent
(size-oblivious) TinyLFU cache admission policy to handle variable sized items.
Implementing our approach inside two popular caching libraries only requires
minor changes. We show that our algorithms yield competitive or better
hit-ratios and byte hit-ratios compared to the state of the art size-aware
algorithms such as AdaptSize, LHD, LRB, and GDSF. Further, a runtime comparison
indicates that our implementation is faster by up to x3 compared to the best
alternative, i.e., it imposes much lower CPU overhead.
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
lhd
|
\cite{lhd}
|
{LHD:} Improving Cache Hit Rate by Maximizing Hit Density
| null | null | true | false |
Nathan Beckmann and
Haoxian Chen and
Asaf Cidon
| 2,018 | null |
https://www.usenix.org/conference/nsdi18/presentation/beckmann
| null | null |
{LHD:} Improving Cache Hit Rate by Maximizing Hit Density
|
LHD: improving cache hit rate by maximizing hit density
|
https://dl.acm.org/doi/10.5555/3307441.3307475
|
We introduce least hit density (LHD), a novel eviction policy for key-value caches. LHD predicts each object's expected hits-per-space-consumed (hit density).
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
cacheus
|
\cite{cacheus}
|
Learning Cache Replacement with {CACHEUS}
| null | null | true | false |
Liana V. Rodriguez and
Farzana Beente Yusuf and
Steven Lyons and
Eysler Paz and
Raju Rangaswami and
Jason Liu and
Ming Zhao and
Giri Narasimhan
| 2,021 | null |
https://www.usenix.org/conference/fast21/presentation/rodriguez
| null | null |
Learning Cache Replacement with {CACHEUS}
|
Learning Cache Replacement with Cacheus
|
https://www.usenix.org/system/files/fast21-rodriguez.pdf
|
by LV Rodriguez · 2021 · Cited by 125 — Furthermore, CACHEUS enables augmenting state-of-the-art algorithms (e.g., LIRS, ARC) by combining it with a complementary cache replacement
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
sieve
|
\cite{sieve}
|
{SIEVE} is Simpler than {LRU:} an Efficient Turn-Key Eviction Algorithm
for Web Caches
| null | null | true | false |
Yazhuo Zhang and
Juncheng Yang and
Yao Yue and
Ymir Vigfusson and
K. V. Rashmi
| 2,024 | null |
https://www.usenix.org/conference/nsdi24/presentation/zhang-yazhuo
| null | null |
{SIEVE} is Simpler than {LRU:} an Efficient Turn-Key Eviction Algorithm
for Web Caches
|
SIEVE - An Efficient Turn-Key Eviction Algorithm for Web Caches
|
https://www.classcentral.com/course/youtube-nsdi-24-sieve-is-simpler-than-lru-an-efficient-turn-key-eviction-algorithm-for-web-caches-294624
|
Discover how SIEVE outperforms traditional algorithms like LRU in simplicity, efficiency, and scalability for web cache workloads. Learn about the algorithm's
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
cherkasova1998improving
|
\cite{cherkasova1998improving}
|
Improving WWW proxies performance with greedy-dual-size-frequency caching policy
| null | null | true | false |
Cherkasova, Ludmila
| 1,998 | null | null | null | null |
Improving WWW proxies performance with greedy-dual-size-frequency caching policy
|
Improving WWW proxies performance with Greedy-Dual- ...
|
https://www.researchgate.net/publication/228542715_Improving_WWW_proxies_performance_with_Greedy-Dual-Size-Frequency_caching_policy
|
This paper introduces the Greedy-Dual-Size-Frequency caching policy to maximize hit and byte hit rates for WWW proxies. Proposed caching strategy incorporates
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
yang2020twemcache
|
\cite{yang2020twemcache}
|
A large scale analysis of hundreds of in-memory cache clusters at Twitter
| null | null | true | false |
Juncheng Yang and Yao Yue and K. V. Rashmi
| 2,020 | null |
https://www.usenix.org/conference/osdi20/presentation/yang
| null | null |
A large scale analysis of hundreds of in-memory cache clusters at Twitter
|
[PDF] A large scale analysis of hundreds of in-memory cache clusters at ...
|
https://www.usenix.org/system/files/osdi20-yang.pdf
|
A large scale analysis of hundreds of in-memory cache clusters at Twitter. Juncheng Yang, Carnegie Mellon University; Yao Yue, Twitter; K. V. Rashmi, Carnegie Mellon University. In Proceedings of the 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI '20), November 4-6, 2020. [Figure 2: Resources consumed for the three cache use cases (storage, computation, transient).]
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
berg2020cachelib
|
\cite{berg2020cachelib}
|
The {CacheLib} Caching Engine: Design and Experiences at Scale
| null | null | true | false |
Benjamin Berg and Daniel S. Berger and Sara McAllister and Isaac Grosof and Sathya Gunasekar and Jimmy Lu and Michael Uhlar and Jim Carrig and Nathan Beckmann and Mor Harchol-Balter and Gregory R. Ganger
| 2,020 | null |
https://www.usenix.org/conference/osdi20/presentation/berg
| null | null |
The {CacheLib} Caching Engine: Design and Experiences at Scale
|
The CacheLib Caching Engine: Design and Experiences at Scale
|
https://www.usenix.org/conference/osdi20/presentation/berg
|
CacheLib is a general-purpose caching engine, designed based on experiences with a range of caching use cases at Facebook, that facilitates the easy
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
icebreaker
|
\cite{icebreaker}
|
IceBreaker: warming serverless functions better with heterogeneity
| null | null | true | false |
Rohan Basu Roy and
Tirthak Patel and
Devesh Tiwari
| 2,022 | null |
https://doi.org/10.1145/3503222.3507750
|
10.1145/3503222.3507750
| null |
IceBreaker: warming serverless functions better with heterogeneity
|
[PDF] IceBreaker: Warming Serverless Functions Better with Heterogeneity
|
http://www1.ece.neu.edu/~ningfang/SimPaper/icebreaker-ASPLOS22.pdf
|
IceBreaker is a novel function pre-warming and keep-alive scheme for serverless functions that exploit server-heterogeneity to lower the keep-alive cost and
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
fasscache
|
\cite{fasscache}
|
FaasCache: keeping serverless computing alive with greedy-dual caching
| null | null | true | false |
Alexander Fuerst and
Prateek Sharma
| 2,021 | null |
https://doi.org/10.1145/3445814.3446757
|
10.1145/3445814.3446757
| null |
FaasCache: keeping serverless computing alive with greedy-dual caching
|
[PDF] FaasCache: Keeping Serverless Computing Alive with Greedy-Dual ...
|
https://afuerst.github.io/assets/FaasCache.pdf
|
Keep-alive policies must keep functions alive based on their resource and usage characteristics, which is challenging due to the diversity in FaaS workloads.
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
DBLP:conf/osdi/ZhongLCHZL0024
|
\cite{DBLP:conf/osdi/ZhongLCHZL0024}
|
DistServe: Disaggregating Prefill and Decoding for Goodput-optimized
Large Language Model Serving
|
http://arxiv.org/abs/2401.09670v3
|
DistServe improves the performance of large language models (LLMs) serving by
disaggregating the prefill and decoding computation. Existing LLM serving
systems colocate the two phases and batch the computation of prefill and
decoding across all users and requests. We find that this strategy not only
leads to strong prefill-decoding interferences but also couples the resource
allocation and parallelism plans for both phases. LLM applications often
emphasize individual latency for each phase: time to first token (TTFT) for the
prefill phase and time per output token (TPOT) of each request for the decoding
phase. In the presence of stringent latency requirements, existing systems have
to prioritize one latency over the other, or over-provision compute resources
to meet both.
DistServe assigns prefill and decoding computation to different GPUs, hence
eliminating prefill-decoding interferences. Given the application's TTFT and
TPOT requirements, DistServe co-optimizes the resource allocation and
parallelism strategy tailored for each phase. DistServe also places the two
phases according to the serving cluster's bandwidth to minimize the
communication caused by disaggregation. As a result, DistServe significantly
improves LLM serving performance in terms of the maximum rate that can be
served within both TTFT and TPOT constraints on each GPU. Our evaluations show
that on various popular LLMs, applications, and latency requirements, DistServe
can serve 7.4x more requests or 12.6x tighter SLO, compared to state-of-the-art
systems, while staying within latency constraints for > 90% of requests.
| true | true |
Yinmin Zhong and
Shengyu Liu and
Junda Chen and
Jianbo Hu and
Yibo Zhu and
Xuanzhe Liu and
Xin Jin and
Hao Zhang
| 2,024 | null |
https://www.usenix.org/conference/osdi24/presentation/zhong-yinmin
| null | null |
DistServe: Disaggregating Prefill and Decoding for Goodput-optimized
Large Language Model Serving
|
[PDF] DistServe: Disaggregating Prefill and Decoding for Goodput ...
|
https://www.usenix.org/system/files/osdi24-zhong-yinmin.pdf
|
DistServe: Disaggregating Prefill and Decoding for Goodput-optimized Large Language Model Serving. Yinmin Zhong and Shengyu Liu, Peking University; Junda Chen, UC San Diego; Jianbo Hu, Peking University; Yibo Zhu, StepFun; Xuanzhe Liu and Xin Jin, Peking University; Hao Zhang, UC San Diego. In Proceedings of the 18th USENIX Symposium on Operating Systems Design and Implementation (OSDI '24), July 10-12, 2024, Santa Clara, CA, USA. Abstract: DistServe improves the performance of large language models (LLMs) serving by disaggregating the prefill and decoding computation.
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
DBLP:journals/corr/abs-2404-09526
|
\cite{DBLP:journals/corr/abs-2404-09526}
|
LoongServe: Efficiently Serving Long-Context Large Language Models with
Elastic Sequence Parallelism
|
http://arxiv.org/abs/2404.09526v2
|
The context window of large language models (LLMs) is rapidly increasing,
leading to a huge variance in resource usage between different requests as well
as between different phases of the same request. Restricted by static
parallelism strategies, existing LLM serving systems cannot efficiently utilize
the underlying resources to serve variable-length requests in different phases.
To address this problem, we propose a new parallelism paradigm, elastic
sequence parallelism (ESP), to elastically adapt to the variance between
different requests and phases. Based on ESP, we design and build LoongServe, an
LLM serving system that (1) improves computation efficiency by elastically
adjusting the degree of parallelism in real-time, (2) improves communication
efficiency by reducing key-value cache migration overhead and overlapping
partial decoding communication with computation, and (3) improves GPU memory
efficiency by reducing key-value cache fragmentation across instances. Our
evaluation under diverse real-world datasets shows that LoongServe improves the
maximum throughput by up to 3.85$\times$ compared to the chunked prefill and
5.81$\times$ compared to the prefill-decoding disaggregation.
| true | true |
Bingyang Wu and
Shengyu Liu and
Yinmin Zhong and
Peng Sun and
Xuanzhe Liu and
Xin Jin
| 2,024 | null |
https://doi.org/10.48550/arXiv.2404.09526
|
10.48550/ARXIV.2404.09526
|
CoRR
|
LoongServe: Efficiently Serving Long-Context Large Language Models with
Elastic Sequence Parallelism
|
LoongServe: Efficiently Serving Long-Context Large Language ...
|
https://colab.ws/articles/10.1145%2F3694715.3695948
|
LoongServe: Efficiently Serving Long-Context Large Language Models with Elastic Sequence Parallelism. Bingyang Wu, Shengyu Liu, Yinmin Zhong
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
DBLP:conf/sosp/KwonLZ0ZY0ZS23
|
\cite{DBLP:conf/sosp/KwonLZ0ZY0ZS23}
|
Efficient Memory Management for Large Language Model Serving with
PagedAttention
|
http://arxiv.org/abs/2309.06180v1
|
High throughput serving of large language models (LLMs) requires batching
sufficiently many requests at a time. However, existing systems struggle
because the key-value cache (KV cache) memory for each request is huge and
grows and shrinks dynamically. When managed inefficiently, this memory can be
significantly wasted by fragmentation and redundant duplication, limiting the
batch size. To address this problem, we propose PagedAttention, an attention
algorithm inspired by the classical virtual memory and paging techniques in
operating systems. On top of it, we build vLLM, an LLM serving system that
achieves (1) near-zero waste in KV cache memory and (2) flexible sharing of KV
cache within and across requests to further reduce memory usage. Our
evaluations show that vLLM improves the throughput of popular LLMs by
2-4$\times$ with the same level of latency compared to the state-of-the-art
systems, such as FasterTransformer and Orca. The improvement is more pronounced
with longer sequences, larger models, and more complex decoding algorithms.
vLLM's source code is publicly available at
https://github.com/vllm-project/vllm
| true | true |
Woosuk Kwon and
Zhuohan Li and
Siyuan Zhuang and
Ying Sheng and
Lianmin Zheng and
Cody Hao Yu and
Joseph Gonzalez and
Hao Zhang and
Ion Stoica
| 2,023 | null |
https://doi.org/10.1145/3600006.3613165
|
10.1145/3600006.3613165
| null |
Efficient Memory Management for Large Language Model Serving with
PagedAttention
|
Efficient Memory Management for Large Language Model ...
|
https://arxiv.org/pdf/2309.06180
|
Efficient Memory Management for Large Language Model Serving with PagedAttention. Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, Ion Stoica (UC Berkeley, Stanford University, Independent Researcher, UC San Diego). Abstract: High throughput serving of large language models (LLMs) requires batching sufficiently many requests at a time. To address this problem, we propose PagedAttention, an attention algorithm inspired by the classical virtual memory and paging techniques in operating systems. On top of it, we build vLLM, an LLM serving system that achieves (1) near-zero waste in KV cache memory and (2) flexible sharing of KV cache within and across requests to further reduce memory usage.
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
alpaserve
|
\cite{alpaserve}
|
AlpaServe: Statistical Multiplexing with Model Parallelism for Deep
Learning Serving
|
http://arxiv.org/abs/2302.11665v2
|
Model parallelism is conventionally viewed as a method to scale a single
large deep learning model beyond the memory limits of a single device. In this
paper, we demonstrate that model parallelism can be additionally used for the
statistical multiplexing of multiple devices when serving multiple models, even
when a single model can fit into a single device. Our work reveals a
fundamental trade-off between the overhead introduced by model parallelism and
the opportunity to exploit statistical multiplexing to reduce serving latency
in the presence of bursty workloads. We explore the new trade-off space and
present a novel serving system, AlpaServe, that determines an efficient
strategy for placing and parallelizing collections of large deep learning
models across a distributed cluster. Evaluation results on production workloads
show that AlpaServe can process requests at up to 10x higher rates or 6x more
burstiness while staying within latency constraints for more than 99% of
requests.
| true | true |
Zhuohan Li and Lianmin Zheng and Yinmin Zhong and Vincent Liu and Ying Sheng and Xin Jin and Yanping Huang and Zhifeng Chen and Hao Zhang and Joseph E. Gonzalez and Ion Stoica
| 2,023 | null |
https://www.usenix.org/conference/osdi23/presentation/li-zhouhan
| null | null |
AlpaServe: Statistical Multiplexing with Model Parallelism for Deep
Learning Serving
|
alpa-projects/mms: AlpaServe - GitHub
|
https://github.com/alpa-projects/mms
|
This is the official implementation of our OSDI'23 paper: AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving. To reproduce
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
DBLP:conf/osdi/YuJKKC22
|
\cite{DBLP:conf/osdi/YuJKKC22}
|
Orca: {A} Distributed Serving System for Transformer-Based Generative
Models
| null | null | true | false |
Gyeong{-}In Yu and
Joo Seong Jeong and
Geon{-}Woo Kim and
Soojeong Kim and
Byung{-}Gon Chun
| 2,022 | null |
https://www.usenix.org/conference/osdi22/presentation/yu
| null | null |
Orca: {A} Distributed Serving System for Transformer-Based Generative
Models
|
Orca: A Distributed Serving System for Transformer-Based ... - USENIX
|
https://www.usenix.org/conference/osdi22/presentation/yu
|
We have implemented a distributed serving system called ORCA, with additional designs for scalability to models with hundreds of billions of parameters.
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
DBLP:conf/isca/PatelCZSGMB24
|
\cite{DBLP:conf/isca/PatelCZSGMB24}
|
Splitwise: Efficient generative LLM inference using phase splitting
|
http://arxiv.org/abs/2311.18677v2
|
Recent innovations in generative large language models (LLMs) have made their
applications and use-cases ubiquitous. This has led to large-scale deployments
of these models, using complex, expensive, and power-hungry AI accelerators,
most commonly GPUs. These developments make LLM inference efficiency an
important challenge. Based on our extensive characterization, we find that
there are two main phases during an LLM inference request: a compute-intensive
prompt computation, and a memory-intensive token generation, each with distinct
latency, throughput, memory, and power characteristics. Despite
state-of-the-art batching and scheduling, the token generation phase
underutilizes compute resources. Specifically, unlike compute-intensive prompt
computation phases, token generation phases do not require the compute
capability of the latest GPUs, and can be run with lower power and cost.
With Splitwise, we propose splitting the two phases of a LLM inference
request on to separate machines. This allows us to use hardware that is
well-suited for each phase, and provision resources independently per phase.
However, splitting an inference request across machines requires state transfer
from the machine running prompt computation over to the machine generating
tokens. We implement and optimize this state transfer using the fast back-plane
interconnects available in today's GPU clusters.
We use the Splitwise technique to design LLM inference clusters using the
same or different types of machines for the prompt computation and token
generation phases. Our clusters are optimized for three key objectives:
throughput, cost, and power. In particular, we show that we can achieve 1.4x
higher throughput at 20% lower cost than current designs. Alternatively, we can
achieve 2.35x more throughput with the same cost and power budgets.
| true | true |
Pratyush Patel and
Esha Choukse and
Chaojie Zhang and
Aashaka Shah and
{\'{I}}{\~{n}}igo Goiri and
Saeed Maleki and
Ricardo Bianchini
| 2,024 | null |
https://doi.org/10.1109/ISCA59077.2024.00019
|
10.1109/ISCA59077.2024.00019
| null |
Splitwise: Efficient generative LLM inference using phase splitting
|
Splitwise: Efficient generative LLM inference using phase splitting
|
http://arxiv.org/pdf/2311.18677v2
|
Recent innovations in generative large language models (LLMs) have made their
applications and use-cases ubiquitous. This has led to large-scale deployments
of these models, using complex, expensive, and power-hungry AI accelerators,
most commonly GPUs. These developments make LLM inference efficiency an
important challenge. Based on our extensive characterization, we find that
there are two main phases during an LLM inference request: a compute-intensive
prompt computation, and a memory-intensive token generation, each with distinct
latency, throughput, memory, and power characteristics. Despite
state-of-the-art batching and scheduling, the token generation phase
underutilizes compute resources. Specifically, unlike compute-intensive prompt
computation phases, token generation phases do not require the compute
capability of the latest GPUs, and can be run with lower power and cost.
With Splitwise, we propose splitting the two phases of a LLM inference
request on to separate machines. This allows us to use hardware that is
well-suited for each phase, and provision resources independently per phase.
However, splitting an inference request across machines requires state transfer
from the machine running prompt computation over to the machine generating
tokens. We implement and optimize this state transfer using the fast back-plane
interconnects available in today's GPU clusters.
We use the Splitwise technique to design LLM inference clusters using the
same or different types of machines for the prompt computation and token
generation phases. Our clusters are optimized for three key objectives:
throughput, cost, and power. In particular, we show that we can achieve 1.4x
higher throughput at 20% lower cost than current designs. Alternatively, we can
achieve 2.35x more throughput with the same cost and power budgets.
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
298501
|
\cite{298501}
|
{Cost-Efficient} Large Language Model Serving for Multi-turn Conversations with {CachedAttention}
| null | null | true | false |
Bin Gao and Zhuomin He and Puru Sharma and Qingxuan Kang and Djordje Jevdjic and Junbo Deng and Xingkun Yang and Zhou Yu and Pengfei Zuo
| 2,024 | null |
https://www.usenix.org/conference/atc24/presentation/gao-bin-cost
| null | null |
{Cost-Efficient} Large Language Model Serving for Multi-turn Conversations with {CachedAttention}
|
Cost-Efficient Large Language Model Serving for Multi-turn ... - arXiv
|
https://arxiv.org/abs/2403.19708
|
Cost-Efficient Large Language Model Serving for Multi-turn Conversations with CachedAttention, by Bin Gao and 8 other authors. To address the problem, this paper proposes CachedAttention, a new attention mechanism that enables reuse of KV caches across multi-turn conversations, significantly reducing the repetitive computation overheads.
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
DBLP:journals/corr/abs-2412-17246
|
\cite{DBLP:journals/corr/abs-2412-17246}
|
Fast and Live Model Auto Scaling with {O(1)} Host Caching
| null | null | true | false |
Dingyan Zhang and
Haotian Wang and
Yang Liu and
Xingda Wei and
Yizhou Shan and
Rong Chen and
Haibo Chen
| 2,024 | null |
https://doi.org/10.48550/arXiv.2412.17246
|
10.48550/ARXIV.2412.17246
|
CoRR
|
Fast and Live Model Auto Scaling with {O(1)} Host Caching
|
Fast and Live Model Auto Scaling with 𝑂(1) Host Caching
|
https://arxiv.org/html/2412.17246v1
|
Model autoscaling is the key mechanism to achieve serverless model-as-a-service, but it faces a fundamental trade-off between scaling speed and storage/memory
|
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider
|
2506.02634v1
|
shahrad2020serverless
|
\cite{shahrad2020serverless}
|
Serverless in the Wild: Characterizing and Optimizing the Serverless
Workload at a Large Cloud Provider
|
http://arxiv.org/abs/2003.03423v3
|
Function as a Service (FaaS) has been gaining popularity as a way to deploy
computations to serverless backends in the cloud. This paradigm shifts the
complexity of allocating and provisioning resources to the cloud provider,
which has to provide the illusion of always-available resources (i.e., fast
function invocations without cold starts) at the lowest possible resource cost.
Doing so requires the provider to deeply understand the characteristics of the
FaaS workload. Unfortunately, there has been little to no public information on
these characteristics. Thus, in this paper, we first characterize the entire
production FaaS workload of Azure Functions. We show for example that most
functions are invoked very infrequently, but there is an 8-order-of-magnitude
range of invocation frequencies. Using observations from our characterization,
we then propose a practical resource management policy that significantly
reduces the number of function coldstarts,while spending fewerresources than
state-of-the-practice policies.
| true | true |
Mohammad Shahrad and Rodrigo Fonseca and Inigo Goiri and Gohar Chaudhry and Paul Batum and Jason Cooke and Eduardo Laureano and Colby Tresness and Mark Russinovich and Ricardo Bianchini
| 2,020 | null |
https://www.usenix.org/conference/atc20/presentation/shahrad
| null | null |
Serverless in the Wild: Characterizing and Optimizing the Serverless
Workload at a Large Cloud Provider
|
Characterizing and Optimizing the Serverless Workload at ...
|
https://www.usenix.org/system/files/atc20-shahrad.pdf
|
by M Shahrad · 2020 · Cited by 879 — This paper characterizes Azure Functions' serverless workload, showing most functions are invoked infrequently, and proposes a resource
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
liu2024:visual
|
\cite{liu2024:visual}
|
Visual Instruction Tuning
|
http://arxiv.org/abs/2304.08485v2
|
Instruction tuning large language models (LLMs) using machine-generated
instruction-following data has improved zero-shot capabilities on new tasks,
but the idea is less explored in the multimodal field. In this paper, we
present the first attempt to use language-only GPT-4 to generate multimodal
language-image instruction-following data. By instruction tuning on such
generated data, we introduce LLaVA: Large Language and Vision Assistant, an
end-to-end trained large multimodal model that connects a vision encoder and
LLM for general-purpose visual and language understanding.Our early experiments
show that LLaVA demonstrates impressive multimodel chat abilities, sometimes
exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and
yields a 85.1% relative score compared with GPT-4 on a synthetic multimodal
instruction-following dataset. When fine-tuned on Science QA, the synergy of
LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make
GPT-4 generated visual instruction tuning data, our model and code base
publicly available.
| true | true |
Liu, Haotian and Li, Chunyuan and Wu, Qingyang and Lee, Yong Jae
| 2,024 | null | null | null |
Advances in neural information processing systems
|
Visual Instruction Tuning
|
Visual Instruction Tuning
|
http://arxiv.org/pdf/2304.08485v2
|
Instruction tuning large language models (LLMs) using machine-generated
instruction-following data has improved zero-shot capabilities on new tasks,
but the idea is less explored in the multimodal field. In this paper, we
present the first attempt to use language-only GPT-4 to generate multimodal
language-image instruction-following data. By instruction tuning on such
generated data, we introduce LLaVA: Large Language and Vision Assistant, an
end-to-end trained large multimodal model that connects a vision encoder and
LLM for general-purpose visual and language understanding.Our early experiments
show that LLaVA demonstrates impressive multimodel chat abilities, sometimes
exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and
yields a 85.1% relative score compared with GPT-4 on a synthetic multimodal
instruction-following dataset. When fine-tuned on Science QA, the synergy of
LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make
GPT-4 generated visual instruction tuning data, our model and code base
publicly available.
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
bai2023:qwen
|
\cite{bai2023:qwen}
|
Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond
| null | null | true | false |
Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren
| 2,023 | null | null | null | null |
Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond
|
Qwen-VL: A Versatile Vision-Language Model for Understanding...
|
https://openreview.net/forum?id=qrGjFJVl3m
|
Despite the effort in open-sourcing the model and its weights, the reviewers find QWEN-VL lacking in significant research contributions and technical novelty. * _**Open-source:**_ Qwen-VL is an open-sourced large vision-language model that excels in **(i)** achieving leading performance across a wide range of vision-language understanding and generation tasks, **(ii)** offering multi-lingual support, particularly in English and Chinese, **(iii)** accommodating multi-image and high-resolution inputs, and **(iv)** demonstrating fine-grained visual perception abilities, particularly in scene text-oriented visual question-answering and visual grounding. Unlike previous representative vision-language models like PaLI-X, which leverages proprietary in-house data and utilize publicly inaccessible model weights (_e.g._, ViT-22B), along with significantly high training costs, our Qwen-VL's training process is more practical and holds considerable referential significance for future research.
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
chen2023:sharegpt4v
|
\cite{chen2023:sharegpt4v}
|
ShareGPT4V: Improving Large Multi-Modal Models with Better Captions
|
http://arxiv.org/abs/2311.12793v2
|
In the realm of large multi-modal models (LMMs), efficient modality alignment
is crucial yet often constrained by the scarcity of high-quality image-text
data. To address this bottleneck, we introduce the ShareGPT4V dataset, a
pioneering large-scale resource featuring 1.2 million highly descriptive
captions, which surpasses existing datasets in diversity and information
content, covering world knowledge, object properties, spatial relationships,
and aesthetic evaluations. Specifically, ShareGPT4V originates from a curated
100K high-quality captions collected from advanced GPT4-Vision and has been
expanded to 1.2M with a superb caption model trained on this subset. ShareGPT4V
first demonstrates its effectiveness for the Supervised Fine-Tuning (SFT)
phase, by substituting an equivalent quantity of detailed captions in existing
SFT datasets with a subset of our high-quality captions, significantly
enhancing the LMMs like LLaVA-7B, LLaVA-1.5-13B, and Qwen-VL-Chat-7B on the MME
and MMBench benchmarks, with respective gains of 222.8/22.0/22.3 and
2.7/1.3/1.5. We further incorporate ShareGPT4V data into both the pre-training
and SFT phases, obtaining ShareGPT4V-7B, a superior LMM based on a simple
architecture that has remarkable performance across a majority of the
multi-modal benchmarks. This project is available at
https://ShareGPT4V.github.io to serve as a pivotal resource for advancing the
LMMs community.
| true | true |
Chen, Lin and Li, Jisong and Dong, Xiaoyi and Zhang, Pan and He, Conghui and Wang, Jiaqi and Zhao, Feng and Lin, Dahua
| 2,023 | null | null | null |
arXiv preprint arXiv:2311.12793
|
ShareGPT4V: Improving Large Multi-Modal Models with Better Captions
|
Improving Large Multi-Modal Models with Better Captions - arXiv
|
https://arxiv.org/abs/2311.12793
|
arXiv:2311.12793 (cs). ShareGPT4V: Improving Large Multi-Modal Models with Better Captions, by Lin Chen and 7 other authors.
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
li2023:videochat
|
\cite{li2023:videochat}
|
VideoChat: Chat-Centric Video Understanding
|
http://arxiv.org/abs/2305.06355v2
|
In this paper, we initiate an attempt of developing an end-to-end
chat-centric video understanding system, coined as VideoChat. It integrates
video foundation models and large language models via a learnable neural
interface, excelling in spatiotemporal reasoning, event localization, and
causal relationship inference. To instructively tune this system, we build a
video-centric instruction dataset, composed of thousands of videos associated
with detailed descriptions and conversations. This dataset emphasizes
spatiotemporal reasoning and captures causal relationships, providing a
valuable asset for training our chat-centric video understanding system.
Preliminary qualitative experiments demonstrate the potential of our system
across a broad spectrum of video applications, which could serve as a simple
prototype system for future research on chat-centric video understanding.
Access our code and data at https://github.com/OpenGVLab/Ask-Anything
| true | true |
Li, KunChang and He, Yinan and Wang, Yi and Li, Yizhuo and Wang, Wenhai and Luo, Ping and Wang, Yali and Wang, Limin and Qiao, Yu
| 2,023 | null | null | null |
arXiv preprint arXiv:2305.06355
|
VideoChat: Chat-Centric Video Understanding
|
VideoChat : Chat-Centric Video Understanding
|
https://img.shlab.org.cn/pjlab/files/2023/06/638215855649090000.pdf
|
by KC Li · 2023 · Cited by 853 — VideoChat is an end-to-end chat-centric video understanding system integrating video and large language models, excelling in spatiotemporal reasoning and
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
zhang2023:video
|
\cite{zhang2023:video}
|
Video-llama: An instruction-tuned audio-visual language model for video understanding
| null | null | true | false |
Zhang, Hang and Li, Xin and Bing, Lidong
| 2,023 | null | null | null |
arXiv preprint arXiv:2306.02858
|
Video-llama: An instruction-tuned audio-visual language model for video understanding
|
[EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio ...
|
https://github.com/DAMO-NLP-SG/Video-LLaMA
|
[EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding. The following checkpoints are the full weights (visual encoder + audio encoder + Q-Formers + language decoder) to launch Video-LLaMA. Firstly, set the `llama_model` (for the path to the language decoder), `imagebind_ckpt_path` (for the path to the audio encoder), `ckpt` (for the path to VL branch) and `ckpt_2` (for the path to AL branch) in eval_configs/video_llama_eval_withaudio.yaml accordingly. The training of each cross-modal branch (i.e., VL branch or AL branch) in Video-LLaMA consists of two stages.
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
lu2024:unified
|
\cite{lu2024:unified}
|
Unified-IO 2: Scaling Autoregressive Multimodal Models with Vision,
Language, Audio, and Action
|
http://arxiv.org/abs/2312.17172v1
|
We present Unified-IO 2, the first autoregressive multimodal model that is
capable of understanding and generating image, text, audio, and action. To
unify different modalities, we tokenize inputs and outputs -- images, text,
audio, action, bounding boxes, etc., into a shared semantic space and then
process them with a single encoder-decoder transformer model. Since training
with such diverse modalities is challenging, we propose various architectural
improvements to stabilize model training. We train our model from scratch on a
large multimodal pre-training corpus from diverse sources with a multimodal
mixture of denoisers objective. To learn an expansive set of skills, such as
following multimodal instructions, we construct and finetune on an ensemble of
120 datasets with prompts and augmentations. With a single unified model,
Unified-IO 2 achieves state-of-the-art performance on the GRIT benchmark and
strong results in more than 35 benchmarks, including image generation and
understanding, natural language understanding, video and audio understanding,
and robotic manipulation. We release all our models to the research community.
| true | true |
Lu, Jiasen and Clark, Christopher and Lee, Sangho and Zhang, Zichen and Khosla, Savya and Marten, Ryan and Hoiem, Derek and Kembhavi, Aniruddha
| 2,024 | null | null | null | null |
Unified-IO 2: Scaling Autoregressive Multimodal Models with Vision,
Language, Audio, and Action
|
Unified-IO 2: Scaling Autoregressive Multimodal Models with ...
|
https://openaccess.thecvf.com/content/CVPR2024/papers/Lu_Unified-IO_2_Scaling_Autoregressive_Multimodal_Models_with_Vision_Language_Audio_CVPR_2024_paper.pdf
|
by J Lu · 2024 · Cited by 210 — UNIFIED-IO 2 is a model that understands and generates image, text, audio, and action, using a single encoder-decoder model.
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
achiam2023:gpt
|
\cite{achiam2023:gpt}
|
Gpt-4 technical report
| null | null | true | false |
Achiam, Josh and Adler, Steven and Agarwal, Sandhini and Ahmad, Lama and Akkaya, Ilge and Aleman, Florencia Leoni and Almeida, Diogo and Altenschmidt, Janko and Altman, Sam and Anadkat, Shyamal and others
| 2,023 | null | null | null |
arXiv preprint arXiv:2303.08774
|
Gpt-4 technical report
|
GPT-4 Technical Report
|
http://arxiv.org/pdf/2303.08774v6
|
We report the development of GPT-4, a large-scale, multimodal model which can
accept image and text inputs and produce text outputs. While less capable than
humans in many real-world scenarios, GPT-4 exhibits human-level performance on
various professional and academic benchmarks, including passing a simulated bar
exam with a score around the top 10% of test takers. GPT-4 is a
Transformer-based model pre-trained to predict the next token in a document.
The post-training alignment process results in improved performance on measures
of factuality and adherence to desired behavior. A core component of this
project was developing infrastructure and optimization methods that behave
predictably across a wide range of scales. This allowed us to accurately
predict some aspects of GPT-4's performance based on models trained with no
more than 1/1,000th the compute of GPT-4.
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
busso2008:iemocap
|
\cite{busso2008:iemocap}
|
IEMOCAP: Interactive emotional dyadic motion capture database
| null | null | true | false |
Busso, Carlos and Bulut, Murtaza and Lee, Chi-Chun and Kazemzadeh, Abe and Mower, Emily and Kim, Samuel and Chang, Jeannette N and Lee, Sungbok and Narayanan, Shrikanth S
| 2,008 | null | null | null |
Language resources and evaluation
|
IEMOCAP: Interactive emotional dyadic motion capture database
|
IEMOCAP- Home
|
https://sail.usc.edu/iemocap/
|
The Interactive Emotional Dyadic Motion Capture (IEMOCAP) database is an acted, multimodal and multispeaker database, recently collected at SAIL lab at USC.
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
zadeh2018:multimodal
|
\cite{zadeh2018:multimodal}
|
Multimodal language analysis in the wild: Cmu-mosei dataset and interpretable dynamic fusion graph
| null | null | true | false |
Zadeh, AmirAli Bagher and Liang, Paul Pu and Poria, Soujanya and Cambria, Erik and Morency, Louis-Philippe
| 2,018 | null | null | null | null |
Multimodal language analysis in the wild: Cmu-mosei dataset and interpretable dynamic fusion graph
|
The MOSEI Dataset and Interpretable Dynamic Fusion
|
https://pliang279.github.io/papers/dap2018_mosei.pdf
|
by PP Liang · Cited by 30 — In this paper we introduce CMU-Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI), the largest dataset for multimodal sentiment analysis and
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
poria2019:meld
|
\cite{poria2019:meld}
|
MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in
Conversations
|
http://arxiv.org/abs/1810.02508v6
|
Emotion recognition in conversations is a challenging task that has recently
gained popularity due to its potential applications. Until now, however, a
large-scale multimodal multi-party emotional conversational database containing
more than two speakers per dialogue was missing. Thus, we propose the
Multimodal EmotionLines Dataset (MELD), an extension and enhancement of
EmotionLines. MELD contains about 13,000 utterances from 1,433 dialogues from
the TV-series Friends. Each utterance is annotated with emotion and sentiment
labels, and encompasses audio, visual and textual modalities. We propose
several strong multimodal baselines and show the importance of contextual and
multimodal information for emotion recognition in conversations. The full
dataset is available for use at http:// affective-meld.github.io.
| true | true |
Poria, Soujanya and Hazarika, Devamanyu and Majumder, Navonil and Naik, Gautam and Cambria, Erik and Mihalcea, Rada
| 2,019 | null | null | null | null |
MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in
Conversations
|
MELD: A Multimodal Multi-Party Dataset for Emotion ...
|
https://github.com/declare-lab/MELD
|
* /data/MELD/train_sent_emo.csv - contains the utterances in the training set along with Sentiment and Emotion labels. * /data/MELD/dev_sent_emo.csv - contains the utterances in the dev set along with Sentiment and Emotion labels. * /data/MELD/test_sent_emo.csv - contains the utterances in the test set along with Sentiment and Emotion labels. * /data/MELD_Dyadic/train_sent_emo_dya.csv - contains the utterances in the training set of the dyadic variant of MELD along with Sentiment and Emotion labels. * /data/MELD_Dyadic/test_sent_emo_dya.csv - contains the utterances in the test set of the dyadic variant along with Sentiment and Emotion labels. Each utterance in a dialogue has been labeled by any of these seven emotions -- Neutral, Joyful, Peaceful, Powerful, Scared, Mad and Sad. The annotations are borrowed from the original dataset.
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
han2023:champagne
|
\cite{han2023:champagne}
|
CHAMPAGNE: Learning Real-world Conversation from Large-Scale Web Videos
|
http://arxiv.org/abs/2303.09713v2
|
Visual information is central to conversation: body gestures and physical
behaviour, for example, contribute to meaning that transcends words alone. To
date, however, most neural conversational models are limited to just text. We
introduce CHAMPAGNE, a generative model of conversations that can account for
visual contexts. To train CHAMPAGNE, we collect and release YTD-18M, a
large-scale corpus of 18M video-based dialogues. YTD-18M is constructed from
web videos: crucial to our data collection pipeline is a pretrained language
model that converts error-prone automatic transcripts to a cleaner dialogue
format while maintaining meaning. Human evaluation reveals that YTD-18M is more
sensible and specific than prior resources (MMDialog, 1M dialogues), while
maintaining visual-groundedness. Experiments demonstrate that 1) CHAMPAGNE
learns to conduct conversation from YTD-18M; and 2) when fine-tuned, it
achieves state-of-the-art results on four vision-language tasks focused on
real-world conversations. We release data, models, and code.
| true | true |
Han, Seungju and Hessel, Jack and Dziri, Nouha and Choi, Yejin and Yu, Youngjae
| 2,023 | null | null | null | null |
CHAMPAGNE: Learning Real-world Conversation from Large-Scale Web Videos
|
[PDF] Learning Real-world Conversation from Large-Scale Web Videos
|
https://openaccess.thecvf.com/content/ICCV2023/papers/Han_CHAMPAGNE_Learning_Real-world_Conversation_from_Large-Scale_Web_Videos_ICCV_2023_paper.pdf
|
Figure 1: CHAMPAGNE is a generative model of real-world conversational frames trained on YTD-18M, a dataset of 18M video-based dialogues.
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
park2024:let
|
\cite{park2024:let}
|
Let's Go Real Talk: Spoken Dialogue Model for Face-to-Face Conversation
|
http://arxiv.org/abs/2406.07867v2
|
In this paper, we introduce a novel Face-to-Face spoken dialogue model. It
processes audio-visual speech from user input and generates audio-visual speech
as the response, marking the initial step towards creating an avatar chatbot
system without relying on intermediate text. To this end, we newly introduce
MultiDialog, the first large-scale multimodal (i.e., audio and visual) spoken
dialogue corpus containing 340 hours of approximately 9,000 dialogues, recorded
based on the open domain dialogue dataset, TopicalChat. The MultiDialog
contains parallel audio-visual recordings of conversation partners acting
according to the given script with emotion annotations, which we expect to open
up research opportunities in multimodal synthesis. Our Face-to-Face spoken
dialogue model incorporates a textually pretrained large language model and
adapts it into the audio-visual spoken dialogue domain by incorporating
speech-text joint pretraining. Through extensive experiments, we validate the
effectiveness of our model in facilitating a face-to-face conversation. Demo
and data are available at https://multidialog.github.io and
https://huggingface.co/datasets/IVLLab/MultiDialog, respectively.
| true | true |
Park, Se Jin and Kim, Chae Won and Rha, Hyeongseop and Kim, Minsu and Hong, Joanna and Yeo, Jeong Hun and Ro, Yong Man
| 2,024 | null | null | null |
arXiv preprint arXiv:2406.07867
|
Let's Go Real Talk: Spoken Dialogue Model for Face-to-Face Conversation
|
Let's Go Real Talk: Spoken Dialogue Model for Face-to-Face...
|
https://openreview.net/forum?id=zby4Ade9CCF
|
In this paper, we introduce a novel Face-to-Face spoken dialogue model. It processes audio-visual speech from user input and generates audio-visual speech as
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
shafique2023:nonverbal
|
\cite{shafique2023:nonverbal}
|
Nonverbal Communication Cue Recognition: A Pathway to More Accessible Communication
| null | null | true | false |
Shafique, Zoya and Wang, Haiyan and Tian, Yingli
| 2,023 | null | null | null | null |
Nonverbal Communication Cue Recognition: A Pathway to More Accessible Communication
|
[PDF] Nonverbal Communication Cue Recognition: A Pathway to More ...
|
https://openaccess.thecvf.com/content/CVPR2023W/WiCV/papers/Shafique_Nonverbal_Communication_Cue_Recognition_A_Pathway_to_More_Accessible_Communication_CVPRW_2023_paper.pdf
|
Nonverbal communication cues (NVCs) include body language, facial expressions, and hand gestures, conveying emotions and attitudes.
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
zhang2023:learning
|
\cite{zhang2023:learning}
|
Learning Emotion Representations from Verbal and Nonverbal Communication
|
http://arxiv.org/abs/2305.13500v1
|
Emotion understanding is an essential but highly challenging component of
artificial general intelligence. The absence of extensively annotated datasets
has significantly impeded advancements in this field. We present EmotionCLIP,
the first pre-training paradigm to extract visual emotion representations from
verbal and nonverbal communication using only uncurated data. Compared to
numerical labels or descriptions used in previous methods, communication
naturally contains emotion information. Furthermore, acquiring emotion
representations from communication is more congruent with the human learning
process. We guide EmotionCLIP to attend to nonverbal emotion cues through
subject-aware context encoding and verbal emotion cues using sentiment-guided
contrastive learning. Extensive experiments validate the effectiveness and
transferability of EmotionCLIP. Using merely linear-probe evaluation protocol,
EmotionCLIP outperforms the state-of-the-art supervised visual emotion
recognition methods and rivals many multimodal approaches across various
benchmarks. We anticipate that the advent of EmotionCLIP will address the
prevailing issue of data scarcity in emotion understanding, thereby fostering
progress in related domains. The code and pre-trained models are available at
https://github.com/Xeaver/EmotionCLIP.
| true | true |
Zhang, Sitao and Pan, Yimu and Wang, James Z
| 2,023 | null | null | null | null |
Learning Emotion Representations from Verbal and Nonverbal Communication
|
Learning Emotion Representations from Verbal and Nonverbal Communication
|
http://arxiv.org/pdf/2305.13500v1
|
Emotion understanding is an essential but highly challenging component of
artificial general intelligence. The absence of extensively annotated datasets
has significantly impeded advancements in this field. We present EmotionCLIP,
the first pre-training paradigm to extract visual emotion representations from
verbal and nonverbal communication using only uncurated data. Compared to
numerical labels or descriptions used in previous methods, communication
naturally contains emotion information. Furthermore, acquiring emotion
representations from communication is more congruent with the human learning
process. We guide EmotionCLIP to attend to nonverbal emotion cues through
subject-aware context encoding and verbal emotion cues using sentiment-guided
contrastive learning. Extensive experiments validate the effectiveness and
transferability of EmotionCLIP. Using merely linear-probe evaluation protocol,
EmotionCLIP outperforms the state-of-the-art supervised visual emotion
recognition methods and rivals many multimodal approaches across various
benchmarks. We anticipate that the advent of EmotionCLIP will address the
prevailing issue of data scarcity in emotion understanding, thereby fostering
progress in related domains. The code and pre-trained models are available at
https://github.com/Xeaver/EmotionCLIP.
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
cherakara2023:furchat
|
\cite{cherakara2023:furchat}
|
FurChat: An Embodied Conversational Agent using LLMs, Combining Open and
Closed-Domain Dialogue with Facial Expressions
|
http://arxiv.org/abs/2308.15214v2
|
We demonstrate an embodied conversational agent that can function as a
receptionist and generate a mixture of open and closed-domain dialogue along
with facial expressions, by using a large language model (LLM) to develop an
engaging conversation. We deployed the system onto a Furhat robot, which is
highly expressive and capable of using both verbal and nonverbal cues during
interaction. The system was designed specifically for the National Robotarium
to interact with visitors through natural conversations, providing them with
information about the facilities, research, news, upcoming events, etc. The
system utilises the state-of-the-art GPT-3.5 model to generate such information
along with domain-general conversations and facial expressions based on prompt
engineering.
| true | true |
Cherakara, Neeraj and Varghese, Finny and Shabana, Sheena and Nelson, Nivan and Karukayil, Abhiram and Kulothungan, Rohith and Farhan, Mohammed Afil and Nesset, Birthe and Moujahid, Meriam and Dinkar, Tanvi and others
| 2,023 | null | null | null | null |
FurChat: An Embodied Conversational Agent using LLMs, Combining Open and
Closed-Domain Dialogue with Facial Expressions
|
[PDF] FurChat: An Embodied Conversational Agent using LLMs ...
|
https://aclanthology.org/2023.sigdial-1.55.pdf
|
FurChat is an embodied conversational agent using LLMs, combining open and closed-domain dialogue with facial expressions, and can function as a receptionist.
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
lee2023:developing
|
\cite{lee2023:developing}
|
Developing Social Robots with Empathetic Non-Verbal Cues Using Large
Language Models
|
http://arxiv.org/abs/2308.16529v1
|
We propose augmenting the empathetic capacities of social robots by
integrating non-verbal cues. Our primary contribution is the design and
labeling of four types of empathetic non-verbal cues, abbreviated as SAFE:
Speech, Action (gesture), Facial expression, and Emotion, in a social robot.
These cues are generated using a Large Language Model (LLM). We developed an
LLM-based conversational system for the robot and assessed its alignment with
social cues as defined by human counselors. Preliminary results show distinct
patterns in the robot's responses, such as a preference for calm and positive
social emotions like 'joy' and 'lively', and frequent nodding gestures. Despite
these tendencies, our approach has led to the development of a social robot
capable of context-aware and more authentic interactions. Our work lays the
groundwork for future studies on human-robot interactions, emphasizing the
essential role of both verbal and non-verbal cues in creating social and
empathetic robots.
| true | true |
Lee, Yoon Kyung and Jung, Yoonwon and Kang, Gyuyi and Hahn, Sowon
| 2,023 | null | null | null |
arXiv preprint arXiv:2308.16529
|
Developing Social Robots with Empathetic Non-Verbal Cues Using Large
Language Models
|
Developing Social Robots with Empathetic Non-Verbal Cues Using ...
|
https://www.researchgate.net/publication/373552152_Developing_Social_Robots_with_Empathetic_Non-Verbal_Cues_Using_Large_Language_Models
|
We developed an LLM-based conversational system for the robot and assessed its alignment with social cues as defined by human counselors. Preliminary results
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
lin2023:one
|
\cite{lin2023:one}
|
One-Stage 3D Whole-Body Mesh Recovery with Component Aware Transformer
|
http://arxiv.org/abs/2303.16160v1
|
Whole-body mesh recovery aims to estimate the 3D human body, face, and hands
parameters from a single image. It is challenging to perform this task with a
single network due to resolution issues, i.e., the face and hands are usually
located in extremely small regions. Existing works usually detect hands and
faces, enlarge their resolution to feed in a specific network to predict the
parameter, and finally fuse the results. While this copy-paste pipeline can
capture the fine-grained details of the face and hands, the connections between
different parts cannot be easily recovered in late fusion, leading to
implausible 3D rotation and unnatural pose. In this work, we propose a
one-stage pipeline for expressive whole-body mesh recovery, named OSX, without
separate networks for each part. Specifically, we design a Component Aware
Transformer (CAT) composed of a global body encoder and a local face/hand
decoder. The encoder predicts the body parameters and provides a high-quality
feature map for the decoder, which performs a feature-level upsample-crop
scheme to extract high-resolution part-specific features and adopt
keypoint-guided deformable attention to estimate hand and face precisely. The
whole pipeline is simple yet effective without any manual post-processing and
naturally avoids implausible prediction. Comprehensive experiments demonstrate
the effectiveness of OSX. Lastly, we build a large-scale Upper-Body dataset
(UBody) with high-quality 2D and 3D whole-body annotations. It contains persons
with partially visible bodies in diverse real-life scenarios to bridge the gap
between the basic task and downstream applications.
| true | true |
Lin, Jing and Zeng, Ailing and Wang, Haoqian and Zhang, Lei and Li, Yu
| 2,023 | null | null | null | null |
One-Stage 3D Whole-Body Mesh Recovery with Component Aware Transformer
|
IDEA-Research/OSX - GitHub
|
https://github.com/IDEA-Research/OSX
|
This repo is official PyTorch implementation of One-Stage 3D Whole-Body Mesh Recovery with Component Aware Transformer (CVPR2023). We propose the first one-
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
dwivedi2024:tokenhmr
|
\cite{dwivedi2024:tokenhmr}
|
TokenHMR: Advancing Human Mesh Recovery with a Tokenized Pose
Representation
|
http://arxiv.org/abs/2404.16752v1
|
We address the problem of regressing 3D human pose and shape from a single
image, with a focus on 3D accuracy. The current best methods leverage large
datasets of 3D pseudo-ground-truth (p-GT) and 2D keypoints, leading to robust
performance. With such methods, we observe a paradoxical decline in 3D pose
accuracy with increasing 2D accuracy. This is caused by biases in the p-GT and
the use of an approximate camera projection model. We quantify the error
induced by current camera models and show that fitting 2D keypoints and p-GT
accurately causes incorrect 3D poses. Our analysis defines the invalid
distances within which minimizing 2D and p-GT losses is detrimental. We use
this to formulate a new loss Threshold-Adaptive Loss Scaling (TALS) that
penalizes gross 2D and p-GT losses but not smaller ones. With such a loss,
there are many 3D poses that could equally explain the 2D evidence. To reduce
this ambiguity we need a prior over valid human poses but such priors can
introduce unwanted bias. To address this, we exploit a tokenized representation
of human pose and reformulate the problem as token prediction. This restricts
the estimated poses to the space of valid poses, effectively providing a
uniform prior. Extensive experiments on the EMDB and 3DPW datasets show that
our reformulated keypoint loss and tokenization allows us to train on
in-the-wild data while improving 3D accuracy over the state-of-the-art. Our
models and code are available for research at https://tokenhmr.is.tue.mpg.de.
| true | true |
Dwivedi, Sai Kumar and Sun, Yu and Patel, Priyanka and Feng, Yao and Black, Michael J
| 2,024 | null | null | null | null |
TokenHMR: Advancing Human Mesh Recovery with a Tokenized Pose
Representation
|
TokenHMR: Advancing Human Mesh Recovery with a ...
|
https://github.com/saidwivedi/TokenHMR
|
Our method has two stages: Tokenization: The encoder maps continuous poses to discrete pose tokens. TokenHMR: During the training of human pose
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
danvevcek2022emoca
|
\cite{danvevcek2022emoca}
|
EMOCA: Emotion Driven Monocular Face Capture and Animation
|
http://arxiv.org/abs/2204.11312v1
|
As 3D facial avatars become more widely used for communication, it is
critical that they faithfully convey emotion. Unfortunately, the best recent
methods that regress parametric 3D face models from monocular images are unable
to capture the full spectrum of facial expression, such as subtle or extreme
emotions. We find the standard reconstruction metrics used for training
(landmark reprojection error, photometric error, and face recognition loss) are
insufficient to capture high-fidelity expressions. The result is facial
geometries that do not match the emotional content of the input image. We
address this with EMOCA (EMOtion Capture and Animation), by introducing a novel
deep perceptual emotion consistency loss during training, which helps ensure
that the reconstructed 3D expression matches the expression depicted in the
input image. While EMOCA achieves 3D reconstruction errors that are on par with
the current best methods, it significantly outperforms them in terms of the
quality of the reconstructed expression and the perceived emotional content. We
also directly regress levels of valence and arousal and classify basic
expressions from the estimated 3D face parameters. On the task of in-the-wild
emotion recognition, our purely geometric approach is on par with the best
image-based methods, highlighting the value of 3D geometry in analyzing human
behavior. The model and code are publicly available at
https://emoca.is.tue.mpg.de.
| true | true |
Dan{\v{e}}{\v{c}}ek, Radek and Black, Michael J and Bolkart, Timo
| 2,022 | null | null | null | null |
EMOCA: Emotion Driven Monocular Face Capture and Animation
|
EMOCA: Emotion Driven Monocular Face Capture and Animation
|
http://arxiv.org/pdf/2204.11312v1
|
As 3D facial avatars become more widely used for communication, it is
critical that they faithfully convey emotion. Unfortunately, the best recent
methods that regress parametric 3D face models from monocular images are unable
to capture the full spectrum of facial expression, such as subtle or extreme
emotions. We find the standard reconstruction metrics used for training
(landmark reprojection error, photometric error, and face recognition loss) are
insufficient to capture high-fidelity expressions. The result is facial
geometries that do not match the emotional content of the input image. We
address this with EMOCA (EMOtion Capture and Animation), by introducing a novel
deep perceptual emotion consistency loss during training, which helps ensure
that the reconstructed 3D expression matches the expression depicted in the
input image. While EMOCA achieves 3D reconstruction errors that are on par with
the current best methods, it significantly outperforms them in terms of the
quality of the reconstructed expression and the perceived emotional content. We
also directly regress levels of valence and arousal and classify basic
expressions from the estimated 3D face parameters. On the task of in-the-wild
emotion recognition, our purely geometric approach is on par with the best
image-based methods, highlighting the value of 3D geometry in analyzing human
behavior. The model and code are publicly available at
https://emoca.is.tue.mpg.de.
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
yi2023:generating
|
\cite{yi2023:generating}
|
Generating Holistic 3D Human Motion from Speech
|
http://arxiv.org/abs/2212.04420v2
|
This work addresses the problem of generating 3D holistic body motions from
human speech. Given a speech recording, we synthesize sequences of 3D body
poses, hand gestures, and facial expressions that are realistic and diverse. To
achieve this, we first build a high-quality dataset of 3D holistic body meshes
with synchronous speech. We then define a novel speech-to-motion generation
framework in which the face, body, and hands are modeled separately. The
separated modeling stems from the fact that face articulation strongly
correlates with human speech, while body poses and hand gestures are less
correlated. Specifically, we employ an autoencoder for face motions, and a
compositional vector-quantized variational autoencoder (VQ-VAE) for the body
and hand motions. The compositional VQ-VAE is key to generating diverse
results. Additionally, we propose a cross-conditional autoregressive model that
generates body poses and hand gestures, leading to coherent and realistic
motions. Extensive experiments and user studies demonstrate that our proposed
approach achieves state-of-the-art performance both qualitatively and
quantitatively. Our novel dataset and code will be released for research
purposes at https://talkshow.is.tue.mpg.de.
| true | true |
Yi, Hongwei and Liang, Hualin and Liu, Yifei and Cao, Qiong and Wen, Yandong and Bolkart, Timo and Tao, Dacheng and Black, Michael J
| 2,023 | null | null | null | null |
Generating Holistic 3D Human Motion from Speech
|
Generating Holistic 3D Human Motion from Speech
|
http://arxiv.org/pdf/2212.04420v2
|
This work addresses the problem of generating 3D holistic body motions from
human speech. Given a speech recording, we synthesize sequences of 3D body
poses, hand gestures, and facial expressions that are realistic and diverse. To
achieve this, we first build a high-quality dataset of 3D holistic body meshes
with synchronous speech. We then define a novel speech-to-motion generation
framework in which the face, body, and hands are modeled separately. The
separated modeling stems from the fact that face articulation strongly
correlates with human speech, while body poses and hand gestures are less
correlated. Specifically, we employ an autoencoder for face motions, and a
compositional vector-quantized variational autoencoder (VQ-VAE) for the body
and hand motions. The compositional VQ-VAE is key to generating diverse
results. Additionally, we propose a cross-conditional autoregressive model that
generates body poses and hand gestures, leading to coherent and realistic
motions. Extensive experiments and user studies demonstrate that our proposed
approach achieves state-of-the-art performance both qualitatively and
quantitatively. Our novel dataset and code will be released for research
purposes at https://talkshow.is.tue.mpg.de.
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
wu2024:motionllm
|
\cite{wu2024:motionllm}
|
MotionLLM: Multimodal Motion-Language Learning with Large Language Models
| null | null | true | false |
Wu, Qi and Zhao, Yubo and Wang, Yifan and Tai, Yu-Wing and Tang, Chi-Keung
| 2,024 | null | null | null |
arXiv preprint arXiv:2405.17013
|
MotionLLM: Multimodal Motion-Language Learning with Large Language Models
|
(PDF) MotionLLM: Multimodal Motion-Language Learning ...
|
https://www.researchgate.net/publication/380906869_MotionLLM_Multimodal_Motion-Language_Learning_with_Large_Language_Models
|
MotionGPT-2 accommodates multiple motion-relevant tasks and supporting multimodal control conditions through pre-trained Large Language Models (
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
lu2023:humantomato
|
\cite{lu2023:humantomato}
|
HumanTOMATO: Text-aligned Whole-body Motion Generation
|
http://arxiv.org/abs/2310.12978v1
|
This work targets a novel text-driven whole-body motion generation task,
which takes a given textual description as input and aims at generating
high-quality, diverse, and coherent facial expressions, hand gestures, and body
motions simultaneously. Previous works on text-driven motion generation tasks
mainly have two limitations: they ignore the key role of fine-grained hand and
face controlling in vivid whole-body motion generation, and lack a good
alignment between text and motion. To address such limitations, we propose a
Text-aligned whOle-body Motion generATiOn framework, named HumanTOMATO, which
is the first attempt to our knowledge towards applicable holistic motion
generation in this research area. To tackle this challenging task, our solution
includes two key designs: (1) a Holistic Hierarchical VQ-VAE (aka H$^2$VQ) and
a Hierarchical-GPT for fine-grained body and hand motion reconstruction and
generation with two structured codebooks; and (2) a pre-trained
text-motion-alignment model to help generated motion align with the input
textual description explicitly. Comprehensive experiments verify that our model
has significant advantages in both the quality of generated motions and their
alignment with text.
| true | true |
Lu, Shunlin and Chen, Ling-Hao and Zeng, Ailing and Lin, Jing and Zhang, Ruimao and Zhang, Lei and Shum, Heung-Yeung
| 2,023 | null | null | null |
arXiv preprint arXiv:2310.12978
|
HumanTOMATO: Text-aligned Whole-body Motion Generation
|
HumanTOMATO: Text-aligned Whole-body Motion ...
|
https://lhchen.top/HumanTOMATO/
|
The proposed HumanTOMATO model can generate text-aligned whole-body motions with vivid and harmonious face, hand, and body motion.
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
ng2023:can
|
\cite{ng2023:can}
|
Can Language Models Learn to Listen?
|
http://arxiv.org/abs/2308.10897v1
|
We present a framework for generating appropriate facial responses from a
listener in dyadic social interactions based on the speaker's words. Given an
input transcription of the speaker's words with their timestamps, our approach
autoregressively predicts a response of a listener: a sequence of listener
facial gestures, quantized using a VQ-VAE. Since gesture is a language
component, we propose treating the quantized atomic motion elements as
additional language token inputs to a transformer-based large language model.
Initializing our transformer with the weights of a language model pre-trained
only on text results in significantly higher quality listener responses than
training a transformer from scratch. We show that our generated listener motion
is fluent and reflective of language semantics through quantitative metrics and
a qualitative user study. In our evaluation, we analyze the model's ability to
utilize temporal and semantic aspects of spoken text. Project page:
https://people.eecs.berkeley.edu/~evonne_ng/projects/text2listen/
| true | true |
Ng, Evonne and Subramanian, Sanjay and Klein, Dan and Kanazawa, Angjoo and Darrell, Trevor and Ginosar, Shiry
| 2,023 | null | null | null | null |
Can Language Models Learn to Listen?
|
Can Language Models Learn to Listen?
|
http://arxiv.org/pdf/2308.10897v1
|
We present a framework for generating appropriate facial responses from a
listener in dyadic social interactions based on the speaker's words. Given an
input transcription of the speaker's words with their timestamps, our approach
autoregressively predicts a response of a listener: a sequence of listener
facial gestures, quantized using a VQ-VAE. Since gesture is a language
component, we propose treating the quantized atomic motion elements as
additional language token inputs to a transformer-based large language model.
Initializing our transformer with the weights of a language model pre-trained
only on text results in significantly higher quality listener responses than
training a transformer from scratch. We show that our generated listener motion
is fluent and reflective of language semantics through quantitative metrics and
a qualitative user study. In our evaluation, we analyze the model's ability to
utilize temporal and semantic aspects of spoken text. Project page:
https://people.eecs.berkeley.edu/~evonne_ng/projects/text2listen/
|
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues
|
2506.00958v1
|
ng2022:learning
|
\cite{ng2022:learning}
|
Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion
|
http://arxiv.org/abs/2204.08451v1
|
We present a framework for modeling interactional communication in dyadic
conversations: given multimodal inputs of a speaker, we autoregressively output
multiple possibilities of corresponding listener motion. We combine the motion
and speech audio of the speaker using a motion-audio cross attention
transformer. Furthermore, we enable non-deterministic prediction by learning a
discrete latent representation of realistic listener motion with a novel
motion-encoding VQ-VAE. Our method organically captures the multimodal and
non-deterministic nature of nonverbal dyadic interactions. Moreover, it
produces realistic 3D listener facial motion synchronous with the speaker (see
video). We demonstrate that our method outperforms baselines qualitatively and
quantitatively via a rich suite of experiments. To facilitate this line of
research, we introduce a novel and large in-the-wild dataset of dyadic
conversations. Code, data, and videos available at
https://evonneng.github.io/learning2listen/.
| true | true |
Ng, Evonne and Joo, Hanbyul and Hu, Liwen and Li, Hao and Darrell, Trevor and Kanazawa, Angjoo and Ginosar, Shiry
| 2,022 | null | null | null | null |
Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion
|
[PDF] Learning To Listen: Modeling Non-Deterministic Dyadic Facial Motion
|
https://openaccess.thecvf.com/content/CVPR2022/papers/Ng_Learning_To_Listen_Modeling_Non-Deterministic_Dyadic_Facial_Motion_CVPR_2022_paper.pdf
|
The method synthesizes listener motion from speaker video using a motion-audio transformer and a VQ-VAE, outputting multiple possibilities of listener motion.
|
Counterfactual Activation Editing for Post-hoc Prosody and
Mispronunciation Correction in TTS Models
|
2506.00832v1
|
strom2006expressive
|
\cite{strom2006expressive}
|
Expressive prosody for unit-selection speech synthesis.
| null | null | true | false |
Strom, Volker and Clark, Robert AJ and King, Simon
| 2,006 | null | null | null | null |
Expressive prosody for unit-selection speech synthesis.
|
Expressive Prosody for Unit-selection Speech Synthesis - CSTR
|
https://www.cstr.ed.ac.uk/downloads/publications/2006/strom06.pdf
|
by V Strom · Cited by 42 — The Festival unit selection speech synthesis system, Multisyn [1], achieves highly natural synthetic speech by avoiding use of an explicit model of prosody in
|
Counterfactual Activation Editing for Post-hoc Prosody and
Mispronunciation Correction in TTS Models
|
2506.00832v1
|
ren2019fastspeech
|
\cite{ren2019fastspeech}
|
FastSpeech: Fast, Robust and Controllable Text to Speech
|
http://arxiv.org/abs/1905.09263v5
|
Neural network based end-to-end text to speech (TTS) has significantly
improved the quality of synthesized speech. Prominent methods (e.g., Tacotron
2) usually first generate mel-spectrogram from text, and then synthesize speech
from the mel-spectrogram using vocoder such as WaveNet. Compared with
traditional concatenative and statistical parametric approaches, neural network
based end-to-end models suffer from slow inference speed, and the synthesized
speech is usually not robust (i.e., some words are skipped or repeated) and
lack of controllability (voice speed or prosody control). In this work, we
propose a novel feed-forward network based on Transformer to generate
mel-spectrogram in parallel for TTS. Specifically, we extract attention
alignments from an encoder-decoder based teacher model for phoneme duration
prediction, which is used by a length regulator to expand the source phoneme
sequence to match the length of the target mel-spectrogram sequence for
parallel mel-spectrogram generation. Experiments on the LJSpeech dataset show
that our parallel model matches autoregressive models in terms of speech
quality, nearly eliminates the problem of word skipping and repeating in
particularly hard cases, and can adjust voice speed smoothly. Most importantly,
compared with autoregressive Transformer TTS, our model speeds up
mel-spectrogram generation by 270x and the end-to-end speech synthesis by 38x.
Therefore, we call our model FastSpeech.
| true | true |
Ren, Yi and Ruan, Yangjun and Tan, Xu and Qin, Tao and Zhao, Sheng and Zhao, Zhou and Liu, Tie-Yan
| 2,019 | null | null | null |
Advances in neural information processing systems
|
FastSpeech: Fast, Robust and Controllable Text to Speech
|
FastSpeech: Fast, Robust and Controllable Text to Speech
|
http://arxiv.org/pdf/1905.09263v5
|
Neural network based end-to-end text to speech (TTS) has significantly
improved the quality of synthesized speech. Prominent methods (e.g., Tacotron
2) usually first generate mel-spectrogram from text, and then synthesize speech
from the mel-spectrogram using vocoder such as WaveNet. Compared with
traditional concatenative and statistical parametric approaches, neural network
based end-to-end models suffer from slow inference speed, and the synthesized
speech is usually not robust (i.e., some words are skipped or repeated) and
lack of controllability (voice speed or prosody control). In this work, we
propose a novel feed-forward network based on Transformer to generate
mel-spectrogram in parallel for TTS. Specifically, we extract attention
alignments from an encoder-decoder based teacher model for phoneme duration
prediction, which is used by a length regulator to expand the source phoneme
sequence to match the length of the target mel-spectrogram sequence for
parallel mel-spectrogram generation. Experiments on the LJSpeech dataset show
that our parallel model matches autoregressive models in terms of speech
quality, nearly eliminates the problem of word skipping and repeating in
particularly hard cases, and can adjust voice speed smoothly. Most importantly,
compared with autoregressive Transformer TTS, our model speeds up
mel-spectrogram generation by 270x and the end-to-end speech synthesis by 38x.
Therefore, we call our model FastSpeech.
|
Counterfactual Activation Editing for Post-hoc Prosody and
Mispronunciation Correction in TTS Models
|
2506.00832v1
|
ren2020fastspeech
|
\cite{ren2020fastspeech}
|
FastSpeech 2: Fast and High-Quality End-to-End Text to Speech
|
http://arxiv.org/abs/2006.04558v8
|
Non-autoregressive text to speech (TTS) models such as FastSpeech can
synthesize speech significantly faster than previous autoregressive models with
comparable quality. The training of FastSpeech model relies on an
autoregressive teacher model for duration prediction (to provide more
information as input) and knowledge distillation (to simplify the data
distribution in output), which can ease the one-to-many mapping problem (i.e.,
multiple speech variations correspond to the same text) in TTS. However,
FastSpeech has several disadvantages: 1) the teacher-student distillation
pipeline is complicated and time-consuming, 2) the duration extracted from the
teacher model is not accurate enough, and the target mel-spectrograms distilled
from teacher model suffer from information loss due to data simplification,
both of which limit the voice quality. In this paper, we propose FastSpeech 2,
which addresses the issues in FastSpeech and better solves the one-to-many
mapping problem in TTS by 1) directly training the model with ground-truth
target instead of the simplified output from teacher, and 2) introducing more
variation information of speech (e.g., pitch, energy and more accurate
duration) as conditional inputs. Specifically, we extract duration, pitch and
energy from speech waveform and directly take them as conditional inputs in
training and use predicted values in inference. We further design FastSpeech
2s, which is the first attempt to directly generate speech waveform from text
in parallel, enjoying the benefit of fully end-to-end inference. Experimental
results show that 1) FastSpeech 2 achieves a 3x training speed-up over
FastSpeech, and FastSpeech 2s enjoys even faster inference speed; 2) FastSpeech
2 and 2s outperform FastSpeech in voice quality, and FastSpeech 2 can even
surpass autoregressive models. Audio samples are available at
https://speechresearch.github.io/fastspeech2/.
| true | true |
Ren, Yi and Hu, Chenxu and Tan, Xu and Qin, Tao and Zhao, Sheng and Zhao, Zhou and Liu, Tie-Yan
| 2,020 | null | null | null |
arXiv preprint arXiv:2006.04558
|
FastSpeech 2: Fast and High-Quality End-to-End Text to Speech
|
FastSpeech 2: Fast and High-Quality End-to-End Text to Speech
|
https://www.microsoft.com/en-us/research/lab/microsoft-research-asia/articles/fastspeech-2-fast-and-high-quality-end-to-end-text-to-speech/
|
FastSpeech 2 outperforms FastSpeech in voice quality and enjoys a much simpler training pipeline (3x training time reduction) while inheriting its advantages.
|
Counterfactual Activation Editing for Post-hoc Prosody and
Mispronunciation Correction in TTS Models
|
2506.00832v1
|
mohan2021ctrl
|
\cite{mohan2021ctrl}
|
Ctrl-P: Temporal control of prosodic variation for speech synthesis
| null | null | true | false |
Mohan, Devang S Ram and Hu, Vivian and Teh, Tian Huey and Torresquintero, Alexandra and Wallis, Christopher GR and Staib, Marlene and Foglianti, Lorenzo and Gao, Jiameng and King, Simon
| 2,021 | null | null | null |
arXiv preprint arXiv:2106.08352
|
Ctrl-P: Temporal control of prosodic variation for speech synthesis
|
Ctrl-P: Temporal Control of Prosodic Variation for Speech Synthesis
|
http://arxiv.org/pdf/2106.08352v1
|
Text does not fully specify the spoken form, so text-to-speech models must be
able to learn from speech data that vary in ways not explained by the
corresponding text. One way to reduce the amount of unexplained variation in
training data is to provide acoustic information as an additional learning
signal. When generating speech, modifying this acoustic information enables
multiple distinct renditions of a text to be produced.
Since much of the unexplained variation is in the prosody, we propose a model
that generates speech explicitly conditioned on the three primary acoustic
correlates of prosody: $F_{0}$, energy and duration. The model is flexible
about how the values of these features are specified: they can be externally
provided, or predicted from text, or predicted then subsequently modified.
Compared to a model that employs a variational auto-encoder to learn
unsupervised latent features, our model provides more interpretable,
temporally-precise, and disentangled control. When automatically predicting the
acoustic features from text, it generates speech that is more natural than that
from a Tacotron 2 model with reference encoder. Subsequent human-in-the-loop
modification of the predicted acoustic features can significantly further
increase naturalness.
|
Counterfactual Activation Editing for Post-hoc Prosody and
Mispronunciation Correction in TTS Models
|
2506.00832v1
|
bandekar2023speaking
|
\cite{bandekar2023speaking}
|
Speaking rate attention-based duration prediction for speed control TTS
|
http://arxiv.org/abs/2310.08846v1
|
With the advent of high-quality speech synthesis, there is a lot of interest
in controlling various prosodic attributes of speech. Speaking rate is an
essential attribute towards modelling the expressivity of speech. In this work,
we propose a novel approach to control the speaking rate for non-autoregressive
TTS. We achieve this by conditioning the speaking rate inside the duration
predictor, allowing implicit speaking rate control. We show the benefits of
this approach by synthesising audio at various speaking rate factors and
measuring the quality of speaking rate-controlled synthesised speech. Further,
we study the effect of the speaking rate distribution of the training data
towards effective rate control. Finally, we fine-tune a baseline pretrained TTS
model to obtain speaking rate control TTS. We provide various analyses to
showcase the benefits of using this proposed approach, along with objective as
well as subjective metrics. We find that the proposed methods have higher
subjective scores and lower speaker rate errors across many speaking rate
factors over the baseline.
| true | true |
Bandekar, Jesuraj and Udupa, Sathvik and Singh, Abhayjeet and Jayakumar, Anjali and Badiger, Sandhya and Kumar, Saurabh and VH, Pooja and Ghosh, Prasanta Kumar and others
| 2,023 | null | null | null |
arXiv preprint arXiv:2310.08846
|
Speaking rate attention-based duration prediction for speed control TTS
|
Speaking Rate Control of end-to-end TTS Models by Direct ...
|
https://www.isca-archive.org/interspeech_2022/lenglet22_interspeech.pdf
|
by M Lenglet · 2022 · Cited by 8 — Evaluation was performed on the control of speaking rate on both attention-based (TC) and duration predictor based (FS) methods. Objective analyses showed
|
Counterfactual Activation Editing for Post-hoc Prosody and
Mispronunciation Correction in TTS Models
|
2506.00832v1
|
wang2018style
|
\cite{wang2018style}
|
Style Tokens: Unsupervised Style Modeling, Control and Transfer in
End-to-End Speech Synthesis
|
http://arxiv.org/abs/1803.09017v1
|
In this work, we propose "global style tokens" (GSTs), a bank of embeddings
that are jointly trained within Tacotron, a state-of-the-art end-to-end speech
synthesis system. The embeddings are trained with no explicit labels, yet learn
to model a large range of acoustic expressiveness. GSTs lead to a rich set of
significant results. The soft interpretable "labels" they generate can be used
to control synthesis in novel ways, such as varying speed and speaking style -
independently of the text content. They can also be used for style transfer,
replicating the speaking style of a single audio clip across an entire
long-form text corpus. When trained on noisy, unlabeled found data, GSTs learn
to factorize noise and speaker identity, providing a path towards highly
scalable but robust speech synthesis.
| true | true |
Wang, Yuxuan and Stanton, Daisy and Zhang, Yu and Ryan, RJ-Skerry and Battenberg, Eric and Shor, Joel and Xiao, Ying and Jia, Ye and Ren, Fei and Saurous, Rif A
| 2,018 | null | null | null | null |
Style Tokens: Unsupervised Style Modeling, Control and Transfer in
End-to-End Speech Synthesis
|
Unsupervised Style Modeling, Control and Transfer in End- ...
|
https://research.google/pubs/style-tokens-unsupervised-style-modeling-control-and-transfer-in-end-to-end-speech-synthesis/
|
by Y Wang · Cited by 1080 — In this work, we propose “global style tokens” (GSTs), a bank of embeddings that are jointly trained within Tacotron, a state-of-the-art end-to-end speech
|
Counterfactual Activation Editing for Post-hoc Prosody and
Mispronunciation Correction in TTS Models
|
2506.00832v1
|
skerry2018towards
|
\cite{skerry2018towards}
|
Towards End-to-End Prosody Transfer for Expressive Speech Synthesis with
Tacotron
|
http://arxiv.org/abs/1803.09047v1
|
We present an extension to the Tacotron speech synthesis architecture that
learns a latent embedding space of prosody, derived from a reference acoustic
representation containing the desired prosody. We show that conditioning
Tacotron on this learned embedding space results in synthesized audio that
matches the prosody of the reference signal with fine time detail even when the
reference and synthesis speakers are different. Additionally, we show that a
reference prosody embedding can be used to synthesize text that is different
from that of the reference utterance. We define several quantitative and
subjective metrics for evaluating prosody transfer, and report results with
accompanying audio samples from single-speaker and 44-speaker Tacotron models
on a prosody transfer task.
| true | true |
Skerry-Ryan, RJ and Battenberg, Eric and Xiao, Ying and Wang, Yuxuan and Stanton, Daisy and Shor, Joel and Weiss, Ron and Clark, Rob and Saurous, Rif A
| 2,018 | null | null | null | null |
Towards End-to-End Prosody Transfer for Expressive Speech Synthesis with
Tacotron
|
[PDF] Towards End-to-End Prosody Transfer for Expressive Speech ...
|
https://proceedings.mlr.press/v80/skerry-ryan18a/skerry-ryan18a.pdf
|
Abstract. We present an extension to the Tacotron speech synthesis architecture that learns a latent embedding space of prosody, derived from a reference.
|
Counterfactual Activation Editing for Post-hoc Prosody and
Mispronunciation Correction in TTS Models
|
2506.00832v1
|
hsu2018hierarchical
|
\cite{hsu2018hierarchical}
|
Hierarchical Generative Modeling for Controllable Speech Synthesis
|
http://arxiv.org/abs/1810.07217v2
|
This paper proposes a neural sequence-to-sequence text-to-speech (TTS) model
which can control latent attributes in the generated speech that are rarely
annotated in the training data, such as speaking style, accent, background
noise, and recording conditions. The model is formulated as a conditional
generative model based on the variational autoencoder (VAE) framework, with two
levels of hierarchical latent variables. The first level is a categorical
variable, which represents attribute groups (e.g. clean/noisy) and provides
interpretability. The second level, conditioned on the first, is a multivariate
Gaussian variable, which characterizes specific attribute configurations (e.g.
noise level, speaking rate) and enables disentangled fine-grained control over
these attributes. This amounts to using a Gaussian mixture model (GMM) for the
latent distribution. Extensive evaluation demonstrates its ability to control
the aforementioned attributes. In particular, we train a high-quality
controllable TTS model on real found data, which is capable of inferring
speaker and style attributes from a noisy utterance and use it to synthesize
clean speech with controllable speaking style.
| true | true |
Hsu, Wei-Ning and Zhang, Yu and Weiss, Ron J and Zen, Heiga and Wu, Yonghui and Wang, Yuxuan and Cao, Yuan and Jia, Ye and Chen, Zhifeng and Shen, Jonathan and others
| 2,018 | null | null | null |
arXiv preprint arXiv:1810.07217
|
Hierarchical Generative Modeling for Controllable Speech Synthesis
|
Hierarchical Generative Modeling for Controllable Speech Synthesis
|
http://arxiv.org/pdf/1810.07217v2
|
This paper proposes a neural sequence-to-sequence text-to-speech (TTS) model
which can control latent attributes in the generated speech that are rarely
annotated in the training data, such as speaking style, accent, background
noise, and recording conditions. The model is formulated as a conditional
generative model based on the variational autoencoder (VAE) framework, with two
levels of hierarchical latent variables. The first level is a categorical
variable, which represents attribute groups (e.g. clean/noisy) and provides
interpretability. The second level, conditioned on the first, is a multivariate
Gaussian variable, which characterizes specific attribute configurations (e.g.
noise level, speaking rate) and enables disentangled fine-grained control over
these attributes. This amounts to using a Gaussian mixture model (GMM) for the
latent distribution. Extensive evaluation demonstrates its ability to control
the aforementioned attributes. In particular, we train a high-quality
controllable TTS model on real found data, which is capable of inferring
speaker and style attributes from a noisy utterance and use it to synthesize
clean speech with controllable speaking style.
|
Counterfactual Activation Editing for Post-hoc Prosody and
Mispronunciation Correction in TTS Models
|
2506.00832v1
|
lenglet2022speaking
|
\cite{lenglet2022speaking}
|
Speaking Rate Control of end-to-end TTS Models by Direct Manipulation of the Encoder's Output Embeddings
| null | null | true | false |
Lenglet, Martin and Perrotin, Olivier and Bailly, G{\'e}rard
| 2,022 | null | null | null | null |
Speaking Rate Control of end-to-end TTS Models by Direct Manipulation of the Encoder's Output Embeddings
|
Speaking Rate Control of end-to-end TTS Models by ... - ISCA Archive
|
https://www.isca-archive.org/interspeech_2022/lenglet22_interspeech.html
|
Experimental results show that the control provided by embeddings reproduces a behaviour closer to natural speech data.
|
Counterfactual Activation Editing for Post-hoc Prosody and
Mispronunciation Correction in TTS Models
|
2506.00832v1
|
zhang2020unified
|
\cite{zhang2020unified}
|
Unified Mandarin TTS Front-end Based on Distilled BERT Model
|
http://arxiv.org/abs/2012.15404v1
|
The front-end module in a typical Mandarin text-to-speech system (TTS) is
composed of a long pipeline of text processing components, which requires
extensive efforts to build and is prone to large accumulative model size and
cascade errors. In this paper, a pre-trained language model (PLM) based model
is proposed to simultaneously tackle the two most important tasks in TTS
front-end, i.e., prosodic structure prediction (PSP) and grapheme-to-phoneme
(G2P) conversion. We use a pre-trained Chinese BERT[1] as the text encoder and
employ multi-task learning technique to adapt it to the two TTS front-end
tasks. Then, the BERT encoder is distilled into a smaller model by employing a
knowledge distillation technique called TinyBERT[2], making the whole model
size 25% of that of benchmark pipeline models while maintaining competitive
performance on both tasks. With the proposed the methods, we are able to run
the whole TTS front-end module in a light and unified manner, which is more
friendly to deployment on mobile devices.
| true | true |
Zhang, Yang and Deng, Liqun and Wang, Yasheng
| 2,020 | null | null | null |
arXiv preprint arXiv:2012.15404
|
Unified Mandarin TTS Front-end Based on Distilled BERT Model
|
Unified Mandarin TTS Front-end Based on Distilled BERT Model
|
https://arxiv.org/abs/2012.15404
|
We use a pre-trained Chinese BERT[1] as the text encoder and employ multi-task learning technique to adapt it to the two TTS front-end tasks.
|
Counterfactual Activation Editing for Post-hoc Prosody and
Mispronunciation Correction in TTS Models
|
2506.00832v1
|
fong2022speech
|
\cite{fong2022speech}
|
Speech Audio Corrector: using speech from non-target speakers for one-off correction of mispronunciations in grapheme-input text-to-speech
| null | null | true | false |
Fong, Jason and Lyth, Daniel and Henter, Gustav Eje and Tang, Hao and King, Simon
| 2,022 | null | null | null | null |
Speech Audio Corrector: using speech from non-target speakers for one-off correction of mispronunciations in grapheme-input text-to-speech
|
[PDF] using speech from non-target speakers for one-off correction of ...
|
https://www.research.ed.ac.uk/files/364801102/Speech_Audio_Corrector_FONG_DOA13062022_VOR.pdf
|
Missing: 04/08/2025
|
Dual Debiasing for Noisy In-Context Learning for Text Generation
|
2506.00418v1
|
yoo2022ground
|
\cite{yoo2022ground}
|
Ground-Truth Labels Matter: A Deeper Look into Input-Label
Demonstrations
|
http://arxiv.org/abs/2205.12685v2
|
Despite recent explosion of interests in in-context learning, the underlying
mechanism and the precise impact of the quality of demonstrations remain
elusive. Intuitively, ground-truth labels should have as much impact in
in-context learning (ICL) as supervised learning, but recent work reported that
the input-label correspondence is significantly less important than previously
thought. Intrigued by this counter-intuitive observation, we re-examine the
importance of ground-truth labels in in-context learning. With the introduction
of two novel metrics, namely Label-Correctness Sensitivity and Ground-truth
Label Effect Ratio (GLER), we were able to conduct quantifiable analysis on the
impact of ground-truth label demonstrations. Through extensive analyses, we
find that the correct input-label mappings can have varying impacts on the
downstream in-context learning performances, depending on the experimental
configuration. Through additional studies, we identify key components, such as
the verbosity of prompt templates and the language model size, as the
controlling factor to achieve more noise-resilient ICL.
| true | true |
Yoo, Kang Min and Kim, Junyeob and Kim, Hyuhng Joon and Cho, Hyunsoo and Jo, Hwiyeol and Lee, Sang-Woo and Lee, Sang-goo and Kim, Taeuk
| 2,022 | null | null | null | null |
Ground-Truth Labels Matter: A Deeper Look into Input-Label
Demonstrations
|
Ground-Truth Labels Matter: A Deeper Look into Input- ...
|
https://aclanthology.org/2022.emnlp-main.155.pdf
|
by KM Yoo · 2022 · Cited by 100 — We propose two new quantifiable metrics, sensitivity and GLER, to measure the impact of ground-truth label demonstrations on ICL. • We conduct
|
Dual Debiasing for Noisy In-Context Learning for Text Generation
|
2506.00418v1
|
o2023contrastive
|
\cite{o2023contrastive}
|
Contrastive Decoding Improves Reasoning in Large Language Models
|
http://arxiv.org/abs/2309.09117v2
|
We demonstrate that Contrastive Decoding -- a simple, computationally light,
and training-free text generation method proposed by Li et al 2022 -- achieves
large out-of-the-box improvements over greedy decoding on a variety of
reasoning tasks. Originally shown to improve the perceived quality of long-form
text generation, Contrastive Decoding searches for strings that maximize a
weighted difference in likelihood between strong and weak models. We show that
Contrastive Decoding leads LLaMA-65B to outperform LLaMA 2, GPT-3.5 and PaLM
2-L on the HellaSwag commonsense reasoning benchmark, and to outperform LLaMA
2, GPT-3.5 and PaLM-540B on the GSM8K math word reasoning benchmark, in
addition to improvements on a collection of other tasks. Analysis suggests that
Contrastive Decoding improves over existing methods by preventing some abstract
reasoning errors, as well as by avoiding simpler modes such as copying sections
of the input during chain-of-thought. Overall, Contrastive Decoding outperforms
nucleus sampling for long-form generation and greedy decoding for reasoning
tasks, making it a powerful general purpose method for generating text from
language models.
| true | true |
O'Brien, Sean and Lewis, Mike
| 2,023 | null | null | null |
arXiv preprint arXiv:2309.09117
|
Contrastive Decoding Improves Reasoning in Large Language Models
|
Contrastive Decoding Improves Reasoning in Large Language Models
|
http://arxiv.org/pdf/2309.09117v2
|
We demonstrate that Contrastive Decoding -- a simple, computationally light,
and training-free text generation method proposed by Li et al 2022 -- achieves
large out-of-the-box improvements over greedy decoding on a variety of
reasoning tasks. Originally shown to improve the perceived quality of long-form
text generation, Contrastive Decoding searches for strings that maximize a
weighted difference in likelihood between strong and weak models. We show that
Contrastive Decoding leads LLaMA-65B to outperform LLaMA 2, GPT-3.5 and PaLM
2-L on the HellaSwag commonsense reasoning benchmark, and to outperform LLaMA
2, GPT-3.5 and PaLM-540B on the GSM8K math word reasoning benchmark, in
addition to improvements on a collection of other tasks. Analysis suggests that
Contrastive Decoding improves over existing methods by preventing some abstract
reasoning errors, as well as by avoiding simpler modes such as copying sections
of the input during chain-of-thought. Overall, Contrastive Decoding outperforms
nucleus sampling for long-form generation and greedy decoding for reasoning
tasks, making it a powerful general purpose method for generating text from
language models.
|
Dual Debiasing for Noisy In-Context Learning for Text Generation
|
2506.00418v1
|
li2023unified
|
\cite{li2023unified}
|
Unified Demonstration Retriever for In-Context Learning
|
http://arxiv.org/abs/2305.04320v2
|
In-context learning is a new learning paradigm where a language model
conditions on a few input-output pairs (demonstrations) and a test input, and
directly outputs the prediction. It has been shown highly dependent on the
provided demonstrations and thus promotes the research of demonstration
retrieval: given a test input, relevant examples are retrieved from the
training set to serve as informative demonstrations for in-context learning.
While previous works focus on training task-specific retrievers for several
tasks separately, these methods are often hard to transfer and scale on various
tasks, and separately trained retrievers incur a lot of parameter storage and
deployment cost. In this paper, we propose Unified Demonstration Retriever
(\textbf{UDR}), a single model to retrieve demonstrations for a wide range of
tasks. To train UDR, we cast various tasks' training signals into a unified
list-wise ranking formulation by language model's feedback. Then we propose a
multi-task list-wise ranking training framework, with an iterative mining
strategy to find high-quality candidates, which can help UDR fully incorporate
various tasks' signals. Experiments on 30+ tasks across 13 task families and
multiple data domains show that UDR significantly outperforms baselines.
Further analyses show the effectiveness of each proposed component and UDR's
strong ability in various scenarios including different LMs (1.3B - 175B),
unseen datasets, varying demonstration quantities, etc.
| true | true |
Li, Xiaonan and Lv, Kai and Yan, Hang and Lin, Tianyang and Zhu, Wei and Ni, Yuan and Xie, Guotong and Wang, Xiaoling and Qiu, Xipeng
| 2,023 | null | null | null | null |
Unified Demonstration Retriever for In-Context Learning
|
Unified Demonstration Retriever for In-Context Learning
|
https://aclanthology.org/2023.acl-long.256/
|
In this paper, we propose Unified Demonstration Retriever (UDR), a single model to retrieve demonstrations for a wide range of tasks.
|
Dual Debiasing for Noisy In-Context Learning for Text Generation
|
2506.00418v1
|
liucontext
|
\cite{liucontext}
|
In-context Vectors: Making In Context Learning More Effective and
Controllable Through Latent Space Steering
|
http://arxiv.org/abs/2311.06668v3
|
Large language models (LLMs) demonstrate emergent in-context learning
capabilities, where they adapt to new tasks based on example demonstrations.
However, in-context learning has seen limited effectiveness in many settings,
is difficult to quantitatively control and takes up context window space. To
overcome these limitations, we propose an alternative approach that recasts
in-context learning as in-context vectors (ICV). Using ICV has two steps. We
first use a forward pass on demonstration examples to create the in-context
vector from the latent embedding of the LLM. This vector captures essential
information about the intended task. On a new query, instead of adding
demonstrations to the prompt, we shift the latent states of the LLM using the
ICV. The ICV approach has several benefits: 1) it enables the LLM to more
effectively follow the demonstration examples; 2) it's easy to control by
adjusting the magnitude of the ICV; 3) it reduces the length of the prompt by
removing the in-context demonstrations; 4) ICV is computationally much more
efficient than fine-tuning. We demonstrate that ICV achieves better performance
compared to standard in-context learning and fine-tuning on diverse tasks
including safety, style transfer, role-playing and formatting. Moreover, we
show that we can flexibly teach LLM to simultaneously follow different types of
instructions by simple vector arithmetics on the corresponding ICVs.
| true | true |
Liu, Sheng and Ye, Haotian and Xing, Lei and Zou, James Y
| null | null | null | null | null |
In-context Vectors: Making In Context Learning More Effective and
Controllable Through Latent Space Steering
|
Making In Context Learning More Effective and ...
|
https://consensus.app/papers/incontext-vectors-making-in-context-learning-more-zou-liu/20a28c8387155fa1ac876aad9841f1ee
|
Key takeaway: 'In-context vectors (ICV) improve in-context learning effectiveness, controllability, and computational efficiency in large
|