parent_paper_title (stringclasses, 63 values) | parent_paper_arxiv_id (stringclasses, 63 values) | citation_shorthand (string, lengths 2–56) | raw_citation_text (string, lengths 9–63) | cited_paper_title (string, lengths 5–161) | cited_paper_arxiv_link (string, lengths 32–37, nullable) | cited_paper_abstract (string, lengths 406–1.92k, nullable) | has_metadata (bool, 1 class) | is_arxiv_paper (bool, 2 classes) | bib_paper_authors (string, lengths 2–2.44k, nullable) | bib_paper_year (float64, 1.97k–2.03k, nullable) | bib_paper_month (stringclasses, 16 values) | bib_paper_url (string, lengths 20–116, nullable) | bib_paper_doi (stringclasses, 269 values) | bib_paper_journal (string, lengths 3–148, nullable) | original_title (string, lengths 5–161) | search_res_title (string, lengths 4–122) | search_res_url (string, lengths 22–267) | search_res_content (string, lengths 19–1.92k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
levesque2012winograd
|
\cite{levesque2012winograd}
|
The Defeat of the Winograd Schema Challenge
|
http://arxiv.org/abs/2201.02387v3
|
The Winograd Schema Challenge - a set of twin sentences involving pronoun
reference disambiguation that seem to require the use of commonsense knowledge
- was proposed by Hector Levesque in 2011. By 2019, a number of AI systems,
based on large pre-trained transformer-based language models and fine-tuned on
these kinds of problems, achieved better than 90% accuracy. In this paper, we
review the history of the Winograd Schema Challenge and discuss the lasting
contributions of the flurry of research that has taken place on the WSC in the
last decade. We discuss the significance of various datasets developed for WSC,
and the research community's deeper understanding of the role of surrogate
tasks in assessing the intelligence of an AI system.
| true | true |
Levesque, Hector and Davis, Ernest and Morgenstern, Leora
| 2012 | null | null | null | null |
The Defeat of the Winograd Schema Challenge
|
The Defeat of the Winograd Schema Challenge
|
http://arxiv.org/pdf/2201.02387v3
|
The Winograd Schema Challenge - a set of twin sentences involving pronoun
reference disambiguation that seem to require the use of commonsense knowledge
- was proposed by Hector Levesque in 2011. By 2019, a number of AI systems,
based on large pre-trained transformer-based language models and fine-tuned on
these kinds of problems, achieved better than 90% accuracy. In this paper, we
review the history of the Winograd Schema Challenge and discuss the lasting
contributions of the flurry of research that has taken place on the WSC in the
last decade. We discuss the significance of various datasets developed for WSC,
and the research community's deeper understanding of the role of surrogate
tasks in assessing the intelligence of an AI system.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
zhao2018gender
|
\cite{zhao2018gender}
|
Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods
|
http://arxiv.org/abs/1804.06876v1
|
We introduce a new benchmark, WinoBias, for coreference resolution focused on
gender bias. Our corpus contains Winograd-schema style sentences with entities
corresponding to people referred by their occupation (e.g. the nurse, the
doctor, the carpenter). We demonstrate that a rule-based, a feature-rich, and a
neural coreference system all link gendered pronouns to pro-stereotypical
entities with higher accuracy than anti-stereotypical entities, by an average
difference of 21.1 in F1 score. Finally, we demonstrate a data-augmentation
approach that, in combination with existing word-embedding debiasing
techniques, removes the bias demonstrated by these systems in WinoBias without
significantly affecting their performance on existing coreference benchmark
datasets. Our dataset and code are available at http://winobias.org.
| true | true |
Zhao, Jieyu and Wang, Tianlu and Yatskar, Mark and Ordonez, Vicente and Chang, Kai-Wei
| 2018 | null | null | null | null |
Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods
|
Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods
|
http://arxiv.org/pdf/1804.06876v1
|
We introduce a new benchmark, WinoBias, for coreference resolution focused on
gender bias. Our corpus contains Winograd-schema style sentences with entities
corresponding to people referred by their occupation (e.g. the nurse, the
doctor, the carpenter). We demonstrate that a rule-based, a feature-rich, and a
neural coreference system all link gendered pronouns to pro-stereotypical
entities with higher accuracy than anti-stereotypical entities, by an average
difference of 21.1 in F1 score. Finally, we demonstrate a data-augmentation
approach that, in combination with existing word-embedding debiasing
techniques, removes the bias demonstrated by these systems in WinoBias without
significantly affecting their performance on existing coreference benchmark
datasets. Our dataset and code are available at http://winobias.org.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
vanmassenhove2021neutral
|
\cite{vanmassenhove2021neutral}
|
NeuTral Rewriter: A Rule-Based and Neural Approach to Automatic
Rewriting into Gender-Neutral Alternatives
|
http://arxiv.org/abs/2109.06105v1
|
Recent years have seen an increasing need for gender-neutral and inclusive
language. Within the field of NLP, there are various mono- and bilingual use
cases where gender inclusive language is appropriate, if not preferred due to
ambiguity or uncertainty in terms of the gender of referents. In this work, we
present a rule-based and a neural approach to gender-neutral rewriting for
English along with manually curated synthetic data (WinoBias+) and natural data
(OpenSubtitles and Reddit) benchmarks. A detailed manual and automatic
evaluation highlights how our NeuTral Rewriter, trained on data generated by
the rule-based approach, obtains word error rates (WER) below 0.18% on
synthetic, in-domain and out-of-domain test sets.
| true | true |
Vanmassenhove, Eva and Emmery, Chris and Shterionov, Dimitar
| 2021 | null | null | null | null |
NeuTral Rewriter: A Rule-Based and Neural Approach to Automatic
Rewriting into Gender-Neutral Alternatives
|
NeuTral Rewriter: A Rule-Based and Neural Approach to Automatic ...
|
https://www.researchgate.net/publication/357122955_NeuTral_Rewriter_A_Rule-Based_and_Neural_Approach_to_Automatic_Rewriting_into_Gender_Neutral_Alternatives
|
Our work falls Round-trip translation (from gender-neural to gender-biased) and neural text paraphrasing German [18] Rule-based gender rewriting
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
rudinger2018gender
|
\cite{rudinger2018gender}
|
Gender Bias in Coreference Resolution
|
http://arxiv.org/abs/1804.09301v1
|
We present an empirical study of gender bias in coreference resolution
systems. We first introduce a novel, Winograd schema-style set of minimal pair
sentences that differ only by pronoun gender. With these "Winogender schemas,"
we evaluate and confirm systematic gender bias in three publicly-available
coreference resolution systems, and correlate this bias with real-world and
textual gender statistics.
| true | true |
Rudinger, Rachel and Naradowsky, Jason and Leonard, Brian and Van Durme, Benjamin
| 2018 | null | null | null | null |
Gender Bias in Coreference Resolution
|
Gender Bias in Coreference Resolution
|
http://arxiv.org/pdf/1804.09301v1
|
We present an empirical study of gender bias in coreference resolution
systems. We first introduce a novel, Winograd schema-style set of minimal pair
sentences that differ only by pronoun gender. With these "Winogender schemas,"
we evaluate and confirm systematic gender bias in three publicly-available
coreference resolution systems, and correlate this bias with real-world and
textual gender statistics.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
srivastava2023beyond
|
\cite{srivastava2023beyond}
|
Beyond the Imitation Game: Quantifying and extrapolating the
capabilities of language models
|
http://arxiv.org/abs/2206.04615v3
|
Language models demonstrate both quantitative improvement and new qualitative
capabilities with increasing scale. Despite their potentially transformative
impact, these new capabilities are as yet poorly characterized. In order to
inform future research, prepare for disruptive new model capabilities, and
ameliorate socially harmful effects, it is vital that we understand the present
and near-future capabilities and limitations of language models. To address
this challenge, we introduce the Beyond the Imitation Game benchmark
(BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 450
authors across 132 institutions. Task topics are diverse, drawing problems from
linguistics, childhood development, math, common-sense reasoning, biology,
physics, social bias, software development, and beyond. BIG-bench focuses on
tasks that are believed to be beyond the capabilities of current language
models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense
transformer architectures, and Switch-style sparse transformers on BIG-bench,
across model sizes spanning millions to hundreds of billions of parameters. In
addition, a team of human expert raters performed all tasks in order to provide
a strong baseline. Findings include: model performance and calibration both
improve with scale, but are poor in absolute terms (and when compared with
rater performance); performance is remarkably similar across model classes,
though with benefits from sparsity; tasks that improve gradually and
predictably commonly involve a large knowledge or memorization component,
whereas tasks that exhibit "breakthrough" behavior at a critical scale often
involve multiple steps or components, or brittle metrics; social bias typically
increases with scale in settings with ambiguous context, but this can be
improved with prompting.
| true | true |
{BIG-bench authors}
| 2023 | null | null | null |
TMLR
|
Beyond the Imitation Game: Quantifying and extrapolating the
capabilities of language models
|
Quantifying and extrapolating the capabilities of language models
|
https://openreview.net/forum?id=uyTL5Bvosj
|
The paper introduces the Beyond the Imitation Game benchmark (BIG-bench) as a way to better understand the current and near-future capabilities and limitations
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
dhamala2021bold
|
\cite{dhamala2021bold}
|
BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language
Generation
|
http://arxiv.org/abs/2101.11718v1
|
Recent advances in deep learning techniques have enabled machines to generate
cohesive open-ended text when prompted with a sequence of words as context.
While these models now empower many downstream applications from conversation
bots to automatic storytelling, they have been shown to generate texts that
exhibit social biases. To systematically study and benchmark social biases in
open-ended language generation, we introduce the Bias in Open-Ended Language
Generation Dataset (BOLD), a large-scale dataset that consists of 23,679
English text generation prompts for bias benchmarking across five domains:
profession, gender, race, religion, and political ideology. We also propose new
automated metrics for toxicity, psycholinguistic norms, and text gender
polarity to measure social biases in open-ended text generation from multiple
angles. An examination of text generated from three popular language models
reveals that the majority of these models exhibit a larger social bias than
human-written Wikipedia text across all domains. With these results we
highlight the need to benchmark biases in open-ended language generation and
caution users of language generation models on downstream tasks to be cognizant
of these embedded prejudices.
| true | true |
Dhamala, Jwala and Sun, Tony and Kumar, Varun and Krishna, Satyapriya and Pruksachatkun, Yada and Chang, Kai-Wei and Gupta, Rahul
| 2021 | null | null | null | null |
BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language
Generation
|
Bias in Open-ended Language Generation Dataset (BOLD) - GitHub
|
https://github.com/amazon-science/bold
|
Bias in Open-ended Language Generation Dataset (BOLD) is a dataset to evaluate fairness in open-ended language generation in English language.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
kotek2023gender
|
\cite{kotek2023gender}
|
Gender bias and stereotypes in Large Language Models
|
http://arxiv.org/abs/2308.14921v1
|
Large Language Models (LLMs) have made substantial progress in the past
several months, shattering state-of-the-art benchmarks in many domains. This
paper investigates LLMs' behavior with respect to gender stereotypes, a known
issue for prior models. We use a simple paradigm to test the presence of gender
bias, building on but differing from WinoBias, a commonly used gender bias
dataset, which is likely to be included in the training data of current LLMs.
We test four recently published LLMs and demonstrate that they express biased
assumptions about men and women's occupations. Our contributions in this paper
are as follows: (a) LLMs are 3-6 times more likely to choose an occupation that
stereotypically aligns with a person's gender; (b) these choices align with
people's perceptions better than with the ground truth as reflected in official
job statistics; (c) LLMs in fact amplify the bias beyond what is reflected in
perceptions or the ground truth; (d) LLMs ignore crucial ambiguities in
sentence structure 95% of the time in our study items, but when explicitly
prompted, they recognize the ambiguity; (e) LLMs provide explanations for their
choices that are factually inaccurate and likely obscure the true reason behind
their predictions. That is, they provide rationalizations of their biased
behavior. This highlights a key property of these models: LLMs are trained on
imbalanced datasets; as such, even with the recent successes of reinforcement
learning with human feedback, they tend to reflect those imbalances back at us.
As with other types of societal biases, we suggest that LLMs must be carefully
tested to ensure that they treat minoritized individuals and communities
equitably.
| true | true |
Kotek, Hadas and Dockum, Rikker and Sun, David
| 2023 | null | null | null | null |
Gender bias and stereotypes in Large Language Models
|
Gender bias and stereotypes in Large Language Models
|
http://arxiv.org/pdf/2308.14921v1
|
Large Language Models (LLMs) have made substantial progress in the past
several months, shattering state-of-the-art benchmarks in many domains. This
paper investigates LLMs' behavior with respect to gender stereotypes, a known
issue for prior models. We use a simple paradigm to test the presence of gender
bias, building on but differing from WinoBias, a commonly used gender bias
dataset, which is likely to be included in the training data of current LLMs.
We test four recently published LLMs and demonstrate that they express biased
assumptions about men and women's occupations. Our contributions in this paper
are as follows: (a) LLMs are 3-6 times more likely to choose an occupation that
stereotypically aligns with a person's gender; (b) these choices align with
people's perceptions better than with the ground truth as reflected in official
job statistics; (c) LLMs in fact amplify the bias beyond what is reflected in
perceptions or the ground truth; (d) LLMs ignore crucial ambiguities in
sentence structure 95% of the time in our study items, but when explicitly
prompted, they recognize the ambiguity; (e) LLMs provide explanations for their
choices that are factually inaccurate and likely obscure the true reason behind
their predictions. That is, they provide rationalizations of their biased
behavior. This highlights a key property of these models: LLMs are trained on
imbalanced datasets; as such, even with the recent successes of reinforcement
learning with human feedback, they tend to reflect those imbalances back at us.
As with other types of societal biases, we suggest that LLMs must be carefully
tested to ensure that they treat minoritized individuals and communities
equitably.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
parrish2021bbq
|
\cite{parrish2021bbq}
|
BBQ: A Hand-Built Bias Benchmark for Question Answering
|
http://arxiv.org/abs/2110.08193v2
|
It is well documented that NLP models learn social biases, but little work
has been done on how these biases manifest in model outputs for applied tasks
like question answering (QA). We introduce the Bias Benchmark for QA (BBQ), a
dataset of question sets constructed by the authors that highlight attested
social biases against people belonging to protected classes along nine social
dimensions relevant for U.S. English-speaking contexts. Our task evaluates
model responses at two levels: (i) given an under-informative context, we test
how strongly responses reflect social biases, and (ii) given an adequately
informative context, we test whether the model's biases override a correct
answer choice. We find that models often rely on stereotypes when the context
is under-informative, meaning the model's outputs consistently reproduce
harmful biases in this setting. Though models are more accurate when the
context provides an informative answer, they still rely on stereotypes and
average up to 3.4 percentage points higher accuracy when the correct answer
aligns with a social bias than when it conflicts, with this difference widening
to over 5 points on examples targeting gender for most models tested.
| true | true |
Parrish, Alicia and Chen, Angelica and Nangia, Nikita and Padmakumar, Vishakh and Phang, Jason and Thompson, Jana and Htut, Phu Mon and Bowman, Samuel R
| 2021 | null | null | null | null |
BBQ: A Hand-Built Bias Benchmark for Question Answering
|
BBQ: A hand-built bias benchmark for question answering
|
https://aclanthology.org/2022.findings-acl.165/
|
by A Parrish · 2022 · Cited by 512 — We introduce the Bias Benchmark for QA (BBQ), a dataset of question-sets constructed by the authors that highlight attested social biases.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
webster-etal-2018-mind
|
\cite{webster-etal-2018-mind}
|
Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns
|
http://arxiv.org/abs/1810.05201v1
|
Coreference resolution is an important task for natural language
understanding, and the resolution of ambiguous pronouns a longstanding
challenge. Nonetheless, existing corpora do not capture ambiguous pronouns in
sufficient volume or diversity to accurately indicate the practical utility of
models. Furthermore, we find gender bias in existing corpora and systems
favoring masculine entities. To address this, we present and release GAP, a
gender-balanced labeled corpus of 8,908 ambiguous pronoun-name pairs sampled to
provide diverse coverage of challenges posed by real-world text. We explore a
range of baselines which demonstrate the complexity of the challenge, the best
achieving just 66.9% F1. We show that syntactic structure and continuous neural
models provide promising, complementary cues for approaching the challenge.
| true | true |
Webster, Kellie and Recasens, Marta and Axelrod, Vera and Baldridge, Jason
| 2018 | null | null | null |
Transactions of the Association for Computational Linguistics
|
Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns
|
Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns
|
http://arxiv.org/pdf/1810.05201v1
|
Coreference resolution is an important task for natural language
understanding, and the resolution of ambiguous pronouns a longstanding
challenge. Nonetheless, existing corpora do not capture ambiguous pronouns in
sufficient volume or diversity to accurately indicate the practical utility of
models. Furthermore, we find gender bias in existing corpora and systems
favoring masculine entities. To address this, we present and release GAP, a
gender-balanced labeled corpus of 8,908 ambiguous pronoun-name pairs sampled to
provide diverse coverage of challenges posed by real-world text. We explore a
range of baselines which demonstrate the complexity of the challenge, the best
achieving just 66.9% F1. We show that syntactic structure and continuous neural
models provide promising, complementary cues for approaching the challenge.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
pant-dadu-2022-incorporating
|
\cite{pant-dadu-2022-incorporating}
|
Incorporating Subjectivity into Gendered Ambiguous Pronoun ({GAP}) Resolution using Style Transfer
| null | null | true | false |
Pant, Kartikey and Dadu, Tanvi
| 2022 | null | null | null | null |
Incorporating Subjectivity into Gendered Ambiguous Pronoun ({GAP}) Resolution using Style Transfer
|
Incorporating Subjectivity into Gendered Ambiguous Pronoun (GAP ...
|
https://www.researchgate.net/publication/362266417_Incorporating_Subjectivity_into_Gendered_Ambiguous_Pronoun_GAP_Resolution_using_Style_Transfer
|
Incorporating Subjectivity into Gendered Ambiguous Pronoun (GAP) Resolution using Style Transfer ... GAP-Subjective is the same size as GAP, with 8,908 instances.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
levy-etal-2021-collecting-large
|
\cite{levy-etal-2021-collecting-large}
|
Collecting a Large-Scale Gender Bias Dataset for Coreference Resolution
and Machine Translation
|
http://arxiv.org/abs/2109.03858v2
|
Recent works have found evidence of gender bias in models of machine
translation and coreference resolution using mostly synthetic diagnostic
datasets. While these quantify bias in a controlled experiment, they often do
so on a small scale and consist mostly of artificial, out-of-distribution
sentences. In this work, we find grammatical patterns indicating stereotypical
and non-stereotypical gender-role assignments (e.g., female nurses versus male
dancers) in corpora from three domains, resulting in a first large-scale gender
bias dataset of 108K diverse real-world English sentences. We manually verify
the quality of our corpus and use it to evaluate gender bias in various
coreference resolution and machine translation models. We find that all tested
models tend to over-rely on gender stereotypes when presented with natural
inputs, which may be especially harmful when deployed in commercial systems.
Finally, we show that our dataset lends itself to finetuning a coreference
resolution model, finding it mitigates bias on a held out set. Our dataset and
models are publicly available at www.github.com/SLAB-NLP/BUG. We hope they will
spur future research into gender bias evaluation mitigation techniques in
realistic settings.
| true | true |
Levy, Shahar and Lazar, Koren and Stanovsky, Gabriel
| 2021 | null | null | null | null |
Collecting a Large-Scale Gender Bias Dataset for Coreference Resolution
and Machine Translation
|
[PDF] Collecting a Large-Scale Gender Bias Dataset for Coreference ...
|
https://aclanthology.org/2021.findings-emnlp.211.pdf
|
We use BUG to evaluate gender bias in various coreference resolution and machine translation models, finding that models tend to make
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
gawlikowski2023survey
|
\cite{gawlikowski2023survey}
|
A Survey of Uncertainty in Deep Neural Networks
|
http://arxiv.org/abs/2107.03342v3
|
Due to their increasing spread, confidence in neural network predictions
became more and more important. However, basic neural networks do not deliver
certainty estimates or suffer from over or under confidence. Many researchers
have been working on understanding and quantifying uncertainty in a neural
network's prediction. As a result, different types and sources of uncertainty
have been identified and a variety of approaches to measure and quantify
uncertainty in neural networks have been proposed. This work gives a
comprehensive overview of uncertainty estimation in neural networks, reviews
recent advances in the field, highlights current challenges, and identifies
potential research opportunities. It is intended to give anyone interested in
uncertainty estimation in neural networks a broad overview and introduction,
without presupposing prior knowledge in this field. A comprehensive
introduction to the most crucial sources of uncertainty is given and their
separation into reducible model uncertainty and not reducible data uncertainty
is presented. The modeling of these uncertainties based on deterministic neural
networks, Bayesian neural networks, ensemble of neural networks, and test-time
data augmentation approaches is introduced and different branches of these
fields as well as the latest developments are discussed. For a practical
application, we discuss different measures of uncertainty, approaches for the
calibration of neural networks and give an overview of existing baselines and
implementations. Different examples from the wide spectrum of challenges in
different fields give an idea of the needs and challenges regarding
uncertainties in practical applications. Additionally, the practical
limitations of current methods for mission- and safety-critical real world
applications are discussed and an outlook on the next steps towards a broader
usage of such methods is given.
| true | true |
Gawlikowski, Jakob and Tassi, Cedrique Rovile Njieutcheu and Ali, Mohsin and Lee, Jongseok and Humt, Matthias and Feng, Jianxiang and Kruspe, Anna and Triebel, Rudolph and Jung, Peter and Roscher, Ribana and others
| 2023 | null | null | null |
Artificial Intelligence Review
|
A Survey of Uncertainty in Deep Neural Networks
|
A Survey of Uncertainty in Deep Neural Networks
|
http://arxiv.org/pdf/2107.03342v3
|
Due to their increasing spread, confidence in neural network predictions
became more and more important. However, basic neural networks do not deliver
certainty estimates or suffer from over or under confidence. Many researchers
have been working on understanding and quantifying uncertainty in a neural
network's prediction. As a result, different types and sources of uncertainty
have been identified and a variety of approaches to measure and quantify
uncertainty in neural networks have been proposed. This work gives a
comprehensive overview of uncertainty estimation in neural networks, reviews
recent advances in the field, highlights current challenges, and identifies
potential research opportunities. It is intended to give anyone interested in
uncertainty estimation in neural networks a broad overview and introduction,
without presupposing prior knowledge in this field. A comprehensive
introduction to the most crucial sources of uncertainty is given and their
separation into reducible model uncertainty and not reducible data uncertainty
is presented. The modeling of these uncertainties based on deterministic neural
networks, Bayesian neural networks, ensemble of neural networks, and test-time
data augmentation approaches is introduced and different branches of these
fields as well as the latest developments are discussed. For a practical
application, we discuss different measures of uncertainty, approaches for the
calibration of neural networks and give an overview of existing baselines and
implementations. Different examples from the wide spectrum of challenges in
different fields give an idea of the needs and challenges regarding
uncertainties in practical applications. Additionally, the practical
limitations of current methods for mission- and safety-critical real world
applications are discussed and an outlook on the next steps towards a broader
usage of such methods is given.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
hu2023uncertainty
|
\cite{hu2023uncertainty}
|
Uncertainty in Natural Language Processing: Sources, Quantification, and
Applications
|
http://arxiv.org/abs/2306.04459v1
|
As a main field of artificial intelligence, natural language processing (NLP)
has achieved remarkable success via deep neural networks. Plenty of NLP tasks
have been addressed in a unified manner, with various tasks being associated
with each other through sharing the same paradigm. However, neural networks are
black boxes and rely on probability computation. Making mistakes is inevitable.
Therefore, estimating the reliability and trustworthiness (in other words,
uncertainty) of neural networks becomes a key research direction, which plays a
crucial role in reducing models' risks and making better decisions. Therefore,
in this survey, we provide a comprehensive review of uncertainty-relevant works
in the NLP field. Considering the data and paradigms characteristics, we first
categorize the sources of uncertainty in natural language into three types,
including input, system, and output. Then, we systemically review uncertainty
quantification approaches and the main applications. Finally, we discuss the
challenges of uncertainty estimation in NLP and discuss potential future
directions, taking into account recent trends in the field. Though there have
been a few surveys about uncertainty estimation, our work is the first to
review uncertainty from the NLP perspective.
| true | true |
Hu, Mengting and Zhang, Zhen and Zhao, Shiwan and Huang, Minlie and Wu, Bingzhe
| 2023 | null | null | null |
arXiv preprint arXiv:2306.04459
|
Uncertainty in Natural Language Processing: Sources, Quantification, and
Applications
|
[PDF] Uncertainty in Natural Language Processing: Sources ... - arXiv
|
https://arxiv.org/pdf/2306.04459
|
Then, we systemically review uncertainty quantification approaches and the main applications. Finally, we discuss the challenges of uncertainty.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
huang2023look
|
\cite{huang2023look}
|
Look Before You Leap: An Exploratory Study of Uncertainty Measurement
for Large Language Models
|
http://arxiv.org/abs/2307.10236v4
|
The recent performance leap of Large Language Models (LLMs) opens up new
opportunities across numerous industrial applications and domains. However,
erroneous generations, such as false predictions, misinformation, and
hallucination made by LLMs, have also raised severe concerns for the
trustworthiness of LLMs, especially in safety-, security- and
reliability-sensitive scenarios, potentially hindering real-world adoptions.
While uncertainty estimation has shown its potential for interpreting the
prediction risks made by general machine learning (ML) models, little is known
about whether and to what extent it can help explore an LLM's capabilities and
counteract its undesired behavior. To bridge the gap, in this paper, we
initiate an exploratory study on the risk assessment of LLMs from the lens of
uncertainty. In particular, we experiment with twelve uncertainty estimation
methods and four LLMs on four prominent natural language processing (NLP) tasks
to investigate to what extent uncertainty estimation techniques could help
characterize the prediction risks of LLMs. Our findings validate the
effectiveness of uncertainty estimation for revealing LLMs'
uncertain/non-factual predictions. In addition to general NLP tasks, we
extensively conduct experiments with four LLMs for code generation on two
datasets. We find that uncertainty estimation can potentially uncover buggy
programs generated by LLMs. Insights from our study shed light on future design
and development for reliable LLMs, facilitating further research toward
enhancing the trustworthiness of LLMs.
| true | true |
Huang, Yuheng and Song, Jiayang and Wang, Zhijie and Zhao, Shengming and Chen, Huaming and Juefei-Xu, Felix and Ma, Lei
| 2023 | null | null | null |
arXiv preprint arXiv:2307.10236
|
Look Before You Leap: An Exploratory Study of Uncertainty Measurement
for Large Language Models
|
Look Before You Leap: An Exploratory Study of Uncertainty ... - arXiv
|
https://arxiv.org/abs/2307.10236
|
The recent performance leap of Large Language Models (LLMs) opens up new opportunities across numerous industrial applications and domains.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
fadeeva2023lm
|
\cite{fadeeva2023lm}
|
LM-polygraph: Uncertainty estimation for language models
| null | null | true | false |
Fadeeva, Ekaterina and Vashurin, Roman and Tsvigun, Akim and Vazhentsev, Artem and Petrakov, Sergey and Fedyanin, Kirill and Vasilev, Daniil and Goncharova, Elizaveta and Panchenko, Alexander and Panov, Maxim and others
| 2023 | null | null | null | null |
LM-polygraph: Uncertainty estimation for language models
|
LM-Polygraph: Uncertainty Estimation for Language Models
|
http://arxiv.org/pdf/2311.07383v1
|
Recent advancements in the capabilities of large language models (LLMs) have
paved the way for a myriad of groundbreaking applications in various fields.
However, a significant challenge arises as these models often "hallucinate",
i.e., fabricate facts without providing users an apparent means to discern the
veracity of their statements. Uncertainty estimation (UE) methods are one path
to safer, more responsible, and more effective use of LLMs. However, to date,
research on UE methods for LLMs has been focused primarily on theoretical
rather than engineering contributions. In this work, we tackle this issue by
introducing LM-Polygraph, a framework with implementations of a battery of
state-of-the-art UE methods for LLMs in text generation tasks, with unified
program interfaces in Python. Additionally, it introduces an extendable
benchmark for consistent evaluation of UE techniques by researchers, and a demo
web application that enriches the standard chat dialog with confidence scores,
empowering end-users to discern unreliable responses. LM-Polygraph is
compatible with the most recent LLMs, including BLOOMz, LLaMA-2, ChatGPT, and
GPT-4, and is designed to support future releases of similarly-styled LMs.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
kendall2017uncertainties
|
\cite{kendall2017uncertainties}
|
What Uncertainties Do We Need in Bayesian Deep Learning for Computer
Vision?
|
http://arxiv.org/abs/1703.04977v2
|
There are two major types of uncertainty one can model. Aleatoric uncertainty
captures noise inherent in the observations. On the other hand, epistemic
uncertainty accounts for uncertainty in the model -- uncertainty which can be
explained away given enough data. Traditionally it has been difficult to model
epistemic uncertainty in computer vision, but with new Bayesian deep learning
tools this is now possible. We study the benefits of modeling epistemic vs.
aleatoric uncertainty in Bayesian deep learning models for vision tasks. For
this we present a Bayesian deep learning framework combining input-dependent
aleatoric uncertainty together with epistemic uncertainty. We study models
under the framework with per-pixel semantic segmentation and depth regression
tasks. Further, our explicit uncertainty formulation leads to new loss
functions for these tasks, which can be interpreted as learned attenuation.
This makes the loss more robust to noisy data, also giving new state-of-the-art
results on segmentation and depth regression benchmarks.
| true | true |
Kendall, Alex and Gal, Yarin
| 2017 | null | null | null |
NeurIPS
|
What Uncertainties Do We Need in Bayesian Deep Learning for Computer
Vision?
|
[PDF] What Uncertainties Do We Need in Bayesian Deep Learning ... - NIPS
|
http://papers.neurips.cc/paper/7141-what-uncertainties-do-we-need-in-bayesian-deep-learning-for-computer-vision.pdf
|
Quantifying uncertainty in computer vision applications can be largely divided into regression settings such as depth regression, and classification settings
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
bridle1990probabilistic
|
\cite{bridle1990probabilistic}
|
Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition
| null | null | true | false |
Bridle, John S
| 1990 | null | null | null | null |
Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition
|
PROBABILISTIC INTERPRETATION OF FEEDFORWARD ...
|
https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=818b3279ba393e0c0aeea200652199e8f4c59942
|
by M COSTA · Cited by 37 — J. S. Bridle 1989, "Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition," in Neu-.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
hendrycks2017a
|
\cite{hendrycks2017a}
|
A Baseline for Detecting Misclassified and Out-of-Distribution Examples
in Neural Networks
|
http://arxiv.org/abs/1610.02136v3
|
We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks.
| true | true |
Dan Hendrycks and Kevin Gimpel
| 2017 | null | null | null | null |
A Baseline for Detecting Misclassified and Out-of-Distribution Examples
in Neural Networks
|
A Baseline for Detecting Misclassified and Out-of- ...
|
https://arxiv.org/abs/1610.02136
|
by D Hendrycks · 2016 · Cited by 4553 — We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
jurafsky2000speech
|
\cite{jurafsky2000speech}
|
Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition
| null | null | true | false |
Jurafsky, Daniel and Martin, James H
| 2000 | null | null | null | null |
Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition
|
Speech and Language Processing: An Introduction to Natural ...
|
https://www.amazon.com/Speech-Language-Processing-Introduction-Computational/dp/0130950696
|
An introduction to natural language processing, computational linguistics and speech recognition. ISBN-13: 978-0130950697, ISBN-10: 0130950696.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
fomicheva2020unsupervised
|
\cite{fomicheva2020unsupervised}
|
Unsupervised Quality Estimation for Neural Machine Translation
|
http://arxiv.org/abs/2005.10608v2
|
Quality Estimation (QE) is an important component in making Machine
Translation (MT) useful in real-world applications, as it is aimed to inform
the user on the quality of the MT output at test time. Existing approaches
require large amounts of expert annotated data, computation and time for
training. As an alternative, we devise an unsupervised approach to QE where no
training or access to additional resources besides the MT system itself is
required. Different from most of the current work that treats the MT system as
a black box, we explore useful information that can be extracted from the MT
system as a by-product of translation. By employing methods for uncertainty
quantification, we achieve very good correlation with human judgments of
quality, rivalling state-of-the-art supervised QE models. To evaluate our
approach we collect the first dataset that enables work on both black-box and
glass-box approaches to QE.
| true | true |
Fomicheva, Marina and Sun, Shuo and Yankovskaya, Lisa and Blain, Fr{\'e}d{\'e}ric and Guzm{\'a}n, Francisco and Fishel, Mark and Aletras, Nikolaos and Chaudhary, Vishrav and Specia, Lucia
| 2020 | null | null | null | null |
Unsupervised Quality Estimation for Neural Machine Translation
|
Unsupervised Quality Estimation for Neural Machine Translation
|
http://arxiv.org/pdf/2005.10608v2
|
Quality Estimation (QE) is an important component in making Machine
Translation (MT) useful in real-world applications, as it is aimed to inform
the user on the quality of the MT output at test time. Existing approaches
require large amounts of expert annotated data, computation and time for
training. As an alternative, we devise an unsupervised approach to QE where no
training or access to additional resources besides the MT system itself is
required. Different from most of the current work that treats the MT system as
a black box, we explore useful information that can be extracted from the MT
system as a by-product of translation. By employing methods for uncertainty
quantification, we achieve very good correlation with human judgments of
quality, rivalling state-of-the-art supervised QE models. To evaluate our
approach we collect the first dataset that enables work on both black-box and
glass-box approaches to QE.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
malinin2021uncertainty
|
\cite{malinin2021uncertainty}
|
Uncertainty Estimation in Autoregressive Structured Prediction
|
http://arxiv.org/abs/2002.07650v5
|
Uncertainty estimation is important for ensuring safety and robustness of AI
systems. While most research in the area has focused on un-structured
prediction tasks, limited work has investigated general uncertainty estimation
approaches for structured prediction. Thus, this work aims to investigate
uncertainty estimation for autoregressive structured prediction tasks within a
single unified and interpretable probabilistic ensemble-based framework. We
consider: uncertainty estimation for sequence data at the token-level and
complete sequence-level; interpretations for, and applications of, various
measures of uncertainty; and discuss both the theoretical and practical
challenges associated with obtaining them. This work also provides baselines
for token-level and sequence-level error detection, and sequence-level
out-of-domain input detection on the WMT'14 English-French and WMT'17
English-German translation and LibriSpeech speech recognition datasets.
| true | true |
Malinin, Andrey and Gales, Mark
| 2021 | null | null | null | null |
Uncertainty Estimation in Autoregressive Structured Prediction
|
Uncertainty Estimation in Autoregressive Structured Prediction
|
http://arxiv.org/pdf/2002.07650v5
|
Uncertainty estimation is important for ensuring safety and robustness of AI
systems. While most research in the area has focused on un-structured
prediction tasks, limited work has investigated general uncertainty estimation
approaches for structured prediction. Thus, this work aims to investigate
uncertainty estimation for autoregressive structured prediction tasks within a
single unified and interpretable probabilistic ensemble-based framework. We
consider: uncertainty estimation for sequence data at the token-level and
complete sequence-level; interpretations for, and applications of, various
measures of uncertainty; and discuss both the theoretical and practical
challenges associated with obtaining them. This work also provides baselines
for token-level and sequence-level error detection, and sequence-level
out-of-domain input detection on the WMT'14 English-French and WMT'17
English-German translation and LibriSpeech speech recognition datasets.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
vovk2005algorithmic
|
\cite{vovk2005algorithmic}
|
Algorithmic learning in a random world
| null | null | true | false |
Vovk, Vladimir and Gammerman, Alexander and Shafer, Glenn
| 2005 | null | null | null | null |
Algorithmic learning in a random world
|
Algorithmic Learning in a Random World
|
https://www.amazon.ca/Algorithmic-Learning-Random-World-Vladimir/dp/0387001522
|
Algorithmic Learning in a Random World describes recent theoretical and experimental developments in building computable approximations to Kolmogorov's
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
gal2016dropout
|
\cite{gal2016dropout}
|
Dropout as a Bayesian Approximation: Representing Model Uncertainty in
Deep Learning
|
http://arxiv.org/abs/1506.02142v6
|
Deep learning tools have gained tremendous attention in applied machine
learning. However such tools for regression and classification do not capture
model uncertainty. In comparison, Bayesian models offer a mathematically
grounded framework to reason about model uncertainty, but usually come with a
prohibitive computational cost. In this paper we develop a new theoretical
framework casting dropout training in deep neural networks (NNs) as approximate
Bayesian inference in deep Gaussian processes. A direct result of this theory
gives us tools to model uncertainty with dropout NNs -- extracting information
from existing models that has been thrown away so far. This mitigates the
problem of representing uncertainty in deep learning without sacrificing either
computational complexity or test accuracy. We perform an extensive study of the
properties of dropout's uncertainty. Various network architectures and
non-linearities are assessed on tasks of regression and classification, using
MNIST as an example. We show a considerable improvement in predictive
log-likelihood and RMSE compared to existing state-of-the-art methods, and
finish by using dropout's uncertainty in deep reinforcement learning.
| true | true |
Gal, Yarin and Ghahramani, Zoubin
| 2016 | null | null | null | null |
Dropout as a Bayesian Approximation: Representing Model Uncertainty in
Deep Learning
|
Representing Model Uncertainty in Deep Learning - arXiv
|
https://arxiv.org/abs/1506.02142
|
In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
yu2022learning
|
\cite{yu2022learning}
|
Learning Uncertainty for Unknown Domains with Zero-Target-Assumption
| null | null | true | false |
Yu, Yu and Sajjad, Hassan and Xu, Jia
| 2022 | null | null | null | null |
Learning Uncertainty for Unknown Domains with Zero-Target-Assumption
|
Learning Uncertainty for Unknown Domains with Zero-Target ...
|
https://openreview.net/forum?id=pWVASryOyFw
|
In this paper, the authors propose to use a Maximum-Entropy Rewarded Reinforcement Learning framework to select training data for NLP tasks, the goal of which is to maximize generalization. Weaknesses: The authors only proved the role of entropy in selecting data, but this paper does not elaborate on the motivation and advantages of introducing complex reinforcement learning to train a policy network. 1. “The authors only proved the role of entropy in selecting data, but this paper does not elaborate on the motivation and advantages of introducing complex reinforcement learning to train a policy network.” This paper proposes a method for optimal training set selection with the goal of maximizing generalization to multiple unknown target domains for NLP tasks.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
kuhn2023semantic
|
\cite{kuhn2023semantic}
|
Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation
in Natural Language Generation
|
http://arxiv.org/abs/2302.09664v3
|
We introduce a method to measure uncertainty in large language models. For
tasks like question answering, it is essential to know when we can trust the
natural language outputs of foundation models. We show that measuring
uncertainty in natural language is challenging because of "semantic
equivalence" -- different sentences can mean the same thing. To overcome these
challenges we introduce semantic entropy -- an entropy which incorporates
linguistic invariances created by shared meanings. Our method is unsupervised,
uses only a single model, and requires no modifications to off-the-shelf
language models. In comprehensive ablation studies we show that the semantic
entropy is more predictive of model accuracy on question answering data sets
than comparable baselines.
| true | true |
Kuhn, Lorenz and Gal, Yarin and Farquhar, Sebastian
| 2023 | null | null | null | null |
Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation
in Natural Language Generation
|
Semantic Uncertainty: Linguistic Invariances for ... - OpenReview
|
https://openreview.net/forum?id=VD-AYtP0dve
|
Summary: The paper proposes an approach called semantic entropy, which incorporates linguistic invariances for uncertainty estimation in NLG.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
duan2023shifting
|
\cite{duan2023shifting}
|
Shifting Attention to Relevance: Towards the Predictive Uncertainty
Quantification of Free-Form Large Language Models
|
http://arxiv.org/abs/2307.01379v3
|
Large Language Models (LLMs) show promising results in language generation
and instruction following but frequently "hallucinate", making their outputs
less reliable. Despite Uncertainty Quantification's (UQ) potential solutions,
implementing it accurately within LLMs is challenging. Our research introduces
a simple heuristic: not all tokens in auto-regressive LLM text equally
represent the underlying meaning, as "linguistic redundancy" often allows a few
keywords to convey the essence of long sentences. However, current methods
underestimate this inequality when assessing uncertainty, causing tokens with
limited semantics to be equally or excessively weighted in UQ. To correct this,
we propose Shifting Attention to more Relevant (SAR) components at both token-
and sentence-levels for better UQ. We conduct extensive experiments involving a
range of popular "off-the-shelf" LLMs, such as Vicuna, WizardLM, and
LLaMA-2-chat, with model sizes extending up to 33B parameters. We evaluate
various free-form question-answering tasks, encompassing domains such as
reading comprehension, science Q&A, and medical Q&A. Our experimental results,
coupled with a comprehensive demographic analysis, demonstrate the superior
performance of SAR. The code is available at https://github.com/jinhaoduan/SAR.
| true | true |
Duan, Jinhao and Cheng, Hao and Wang, Shiqi and Wang, Chenan and Zavalny, Alex and Xu, Renjing and Kailkhura, Bhavya and Xu, Kaidi
| 2024 | null | null | null | null |
Shifting Attention to Relevance: Towards the Predictive Uncertainty
Quantification of Free-Form Large Language Models
|
Shifting Attention to Relevance: Towards the Predictive ...
|
https://arxiv.org/abs/2307.01379
|
by J Duan · 2023 · Cited by 172 — Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models. Authors: Jinhao Duan, Hao
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
kadavath2022language
|
\cite{kadavath2022language}
|
Language Models (Mostly) Know What They Know
|
http://arxiv.org/abs/2207.05221v4
|
We study whether language models can evaluate the validity of their own
claims and predict which questions they will be able to answer correctly. We
first show that larger models are well-calibrated on diverse multiple choice
and true/false questions when they are provided in the right format. Thus we
can approach self-evaluation on open-ended sampling tasks by asking models to
first propose answers, and then to evaluate the probability "P(True)" that
their answers are correct. We find encouraging performance, calibration, and
scaling for P(True) on a diverse array of tasks. Performance at self-evaluation
further improves when we allow models to consider many of their own samples
before predicting the validity of one specific possibility. Next, we
investigate whether models can be trained to predict "P(IK)", the probability
that "I know" the answer to a question, without reference to any particular
proposed answer. Models perform well at predicting P(IK) and partially
generalize across tasks, though they struggle with calibration of P(IK) on new
tasks. The predicted P(IK) probabilities also increase appropriately in the
presence of relevant source materials in the context, and in the presence of
hints towards the solution of mathematical word problems. We hope these
observations lay the groundwork for training more honest models, and for
investigating how honesty generalizes to cases where models are trained on
objectives other than the imitation of human writing.
| true | true |
Kadavath, Saurav and Conerly, Tom and Askell, Amanda and Henighan, Tom and Drain, Dawn and Perez, Ethan and Schiefer, Nicholas and Hatfield-Dodds, Zac and DasSarma, Nova and Tran-Johnson, Eli and others
| 2022 | null | null | null |
arXiv preprint arXiv:2207.05221
|
Language Models (Mostly) Know What They Know
|
Language Models (Mostly) Know What They Know
|
http://arxiv.org/pdf/2207.05221v4
|
We study whether language models can evaluate the validity of their own
claims and predict which questions they will be able to answer correctly. We
first show that larger models are well-calibrated on diverse multiple choice
and true/false questions when they are provided in the right format. Thus we
can approach self-evaluation on open-ended sampling tasks by asking models to
first propose answers, and then to evaluate the probability "P(True)" that
their answers are correct. We find encouraging performance, calibration, and
scaling for P(True) on a diverse array of tasks. Performance at self-evaluation
further improves when we allow models to consider many of their own samples
before predicting the validity of one specific possibility. Next, we
investigate whether models can be trained to predict "P(IK)", the probability
that "I know" the answer to a question, without reference to any particular
proposed answer. Models perform well at predicting P(IK) and partially
generalize across tasks, though they struggle with calibration of P(IK) on new
tasks. The predicted P(IK) probabilities also increase appropriately in the
presence of relevant source materials in the context, and in the presence of
hints towards the solution of mathematical word problems. We hope these
observations lay the groundwork for training more honest models, and for
investigating how honesty generalizes to cases where models are trained on
objectives other than the imitation of human writing.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
malinin2018predictive
|
\cite{malinin2018predictive}
|
Predictive Uncertainty Estimation via Prior Networks
|
http://arxiv.org/abs/1802.10501v4
|
Estimating how uncertain an AI system is in its predictions is important to
improve the safety of such systems. Uncertainty in predictions can result from
uncertainty in model parameters, irreducible data uncertainty and uncertainty
due to distributional mismatch between the test and training data
distributions. Different actions might be taken depending on the source of the
uncertainty so it is important to be able to distinguish between them.
Recently, baseline tasks and metrics have been defined and several practical
methods to estimate uncertainty developed. These methods, however, attempt to
model uncertainty due to distributional mismatch either implicitly through
model uncertainty or as data uncertainty. This work proposes a new framework
for modeling predictive uncertainty called Prior Networks (PNs) which
explicitly models distributional uncertainty. PNs do this by parameterizing a
prior distribution over predictive distributions. This work focuses on
uncertainty for classification and evaluates PNs on the tasks of identifying
out-of-distribution (OOD) samples and detecting misclassification on the MNIST
dataset, where they are found to outperform previous methods. Experiments on
synthetic and MNIST and CIFAR-10 data show that unlike previous non-Bayesian
methods PNs are able to distinguish between data and distributional
uncertainty.
| true | true |
Malinin, Andrey and Gales, Mark
| 2,018 | null | null | null | null |
Predictive Uncertainty Estimation via Prior Networks
|
Predictive Uncertainty Estimation via Prior Networks
|
http://arxiv.org/pdf/1802.10501v4
|
Estimating how uncertain an AI system is in its predictions is important to
improve the safety of such systems. Uncertainty in predictive can result from
uncertainty in model parameters, irreducible data uncertainty and uncertainty
due to distributional mismatch between the test and training data
distributions. Different actions might be taken depending on the source of the
uncertainty so it is important to be able to distinguish between them.
Recently, baseline tasks and metrics have been defined and several practical
methods to estimate uncertainty developed. These methods, however, attempt to
model uncertainty due to distributional mismatch either implicitly through
model uncertainty or as data uncertainty. This work proposes a new framework
for modeling predictive uncertainty called Prior Networks (PNs) which
explicitly models distributional uncertainty. PNs do this by parameterizing a
prior distribution over predictive distributions. This work focuses on
uncertainty for classification and evaluates PNs on the tasks of identifying
out-of-distribution (OOD) samples and detecting misclassification on the MNIST
dataset, where they are found to outperform previous methods. Experiments on
synthetic and MNIST and CIFAR-10 data show that unlike previous non-Bayesian
methods PNs are able to distinguish between data and distributional
uncertainty.
|
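For reference, the uncertainty decomposition that Prior Networks exploit (total vs. expected data uncertainty of a Dirichlet over categorical predictions, with their difference acting as distributional uncertainty) can be computed directly from the concentration parameters. The snippet below is my own numpy/scipy sketch under assumed naming, not code from the paper.

```python
import numpy as np
from scipy.special import digamma

def dirichlet_uncertainties(alpha: np.ndarray):
    """Uncertainty decomposition for a Dirichlet(alpha) over class probabilities.

    Returns (total, expected_data, distributional), where
    distributional = total - expected_data is the mutual-information term a
    Prior Network uses to flag out-of-distribution inputs.
    """
    alpha = np.asarray(alpha, dtype=float)
    alpha0 = alpha.sum()
    p_mean = alpha / alpha0
    total = -np.sum(p_mean * np.log(p_mean))                                    # H(E[p])
    expected_data = -np.sum(p_mean * (digamma(alpha + 1) - digamma(alpha0 + 1)))  # E[H(p)]
    return total, expected_data, total - expected_data

# A flat, low-count Dirichlet (e.g. alpha=[1,1,1]) yields high distributional
# uncertainty; a sharp one (e.g. alpha=[100,1,1]) yields low uncertainty overall.
```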
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
darrin2022rainproof
|
\cite{darrin2022rainproof}
|
Rainproof: An Umbrella To Shield Text Generators From
Out-Of-Distribution Data
|
http://arxiv.org/abs/2212.09171v2
|
Implementing effective control mechanisms to ensure the proper functioning
and security of deployed NLP models, from translation to chatbots, is
essential. A key ingredient to ensure safe system behaviour is
Out-Of-Distribution (OOD) detection, which aims to detect whether an input
sample is statistically far from the training distribution. Although OOD
detection is a widely covered topic in classification tasks, most methods rely
on hidden features output by the encoder. In this work, we focus on leveraging
soft-probabilities in a black-box framework, i.e. we can access the
soft-predictions but not the internal states of the model. Our contributions
include: (i) RAINPROOF a Relative informAItioN Projection OOD detection
framework; and (ii) a more operational evaluation setting for OOD detection.
Surprisingly, we find that OOD detection is not necessarily aligned with
task-specific measures. The OOD detector may filter out samples well processed
by the model and keep samples that are not, leading to weaker performance. Our
results show that RAINPROOF provides OOD detection methods more aligned with
task-specific performance metrics than traditional OOD detectors.
| true | true |
Darrin, Maxime and Piantanida, Pablo and Colombo, Pierre
| 2,023 | null | null | null | null |
Rainproof: An Umbrella To Shield Text Generators From
Out-Of-Distribution Data
|
RAINPROOF: An umbrella to shield text generators from ...
|
https://aclanthology.org/2023.emnlp-main.357.pdf
|
by M Darrin · 2023 · Cited by 39 — RAINPROOF is a Relative informAItioN Projection OOD detection framework that shields text generators from out-of-distribution data, using soft-
|
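To make the black-box setting above concrete, a simple score over a generator's soft-probabilities might look like the sketch below. This is only an entropy-style baseline for illustration, not the information-projection score used by RAINPROOF.

```python
import numpy as np

def softprob_ood_score(token_probs: np.ndarray) -> float:
    """Toy black-box OOD score from a generator's per-step soft-probabilities.

    `token_probs` has shape (steps, vocab). Very flat predictive distributions
    (low average KL to uniform) are treated here as a sign that the input is
    far from the training distribution.
    """
    token_probs = np.clip(token_probs, 1e-12, 1.0)
    vocab = token_probs.shape[1]
    kl_to_uniform = np.sum(token_probs * np.log(token_probs * vocab), axis=1)
    return float(-kl_to_uniform.mean())  # higher score => more OOD-like
```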
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
vashurin2025benchmarking
|
\cite{vashurin2025benchmarking}
|
Benchmarking uncertainty quantification methods for large language models with lm-polygraph
| null | null | true | false |
Vashurin, Roman and Fadeeva, Ekaterina and Vazhentsev, Artem and Rvanova, Lyudmila and Vasilev, Daniil and Tsvigun, Akim and Petrakov, Sergey and Xing, Rui and Sadallah, Abdelrahman and Grishchenkov, Kirill and others
| 2,025 | null | null | null |
Transactions of the Association for Computational Linguistics
|
Benchmarking uncertainty quantification methods for large language models with lm-polygraph
|
Benchmarking Uncertainty Quantification Methods for Large ...
|
https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00737/128713/Benchmarking-Uncertainty-Quantification-Methods
|
We propose a new comprehensive benchmark for the evaluation of UQ and uncertainty normalization methods for LLMs. The benchmark can assess the calibration of uncertainty scores and their effectiveness in selective QA/generation and claim-level fact-checking (hallucination detection). (Roman Vashurin, Ekaterina Fadeeva, Artem Vazhentsev, Lyudmila Rvanova, Daniil Vasilev, Akim Tsvigun, Sergey Petrakov, Rui Xing, Abdelrahman Sadallah, Kirill Grishchenkov, Alexander Panchenko, Timothy Baldwin, Preslav Nakov, Maxim Panov, Artem Shelmanov; Transactions of the Association for Computational Linguistics, MIT Press.)
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
santilli2024spurious
|
\cite{santilli2024spurious}
|
On a spurious interaction between uncertainty scores and answer evaluation metrics in generative qa tasks
| null | null | true | false |
Santilli, Andrea and Xiong, Miao and Kirchhof, Michael and Rodriguez, Pau and Danieli, Federico and Suau, Xavier and Zappella, Luca and Williamson, Sinead and Golinski, Adam
| 2,024 | null | null | null | null |
On a spurious interaction between uncertainty scores and answer evaluation metrics in generative qa tasks
|
On a Spurious Interaction between Uncertainty Scores & ...
|
https://openreview.net/pdf?id=jGtL0JFdeD
|
by A Santilli · Cited by 3 — In this paper, we highlight that some UQ methods and answer evaluation metrics are spuriously correlated via the response length, which leads to falsely
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
santilli2025revisiting
|
\cite{santilli2025revisiting}
|
Revisiting Uncertainty Quantification Evaluation in Language Models:
Spurious Interactions with Response Length Bias Results
|
http://arxiv.org/abs/2504.13677v2
|
Uncertainty Quantification (UQ) in Language Models (LMs) is key to improving
their safety and reliability. Evaluations often use metrics like AUROC to
assess how well UQ methods (e.g., negative sequence probabilities) correlate
with task correctness functions (e.g., ROUGE-L). We show that mutual
biases--when both UQ methods and correctness functions are biased by the same
factors--systematically distort evaluation. First, we formally prove that any
mutual bias non-randomly skews AUROC rankings, compromising benchmark
integrity. Second, we confirm this happens empirically by testing 7 widely used
correctness functions, from lexical-based and embedding-based metrics to
LM-as-a-judge approaches, across 4 datasets x 4 models x 8 UQ methods. Our
analysis shows that length biases in correctness functions distort UQ
assessments by interacting with length biases in UQ methods. We identify
LM-as-a-judge methods as the least length-biased, offering a promising path for
a fairer UQ evaluation.
| true | true |
Santilli, Andrea and Golinski, Adam and Kirchhof, Michael and Danieli, Federico and Blaas, Arno and Xiong, Miao and Zappella, Luca and Williamson, Sinead
| 2,025 | null | null | null |
arXiv preprint arXiv:2504.13677
|
Revisiting Uncertainty Quantification Evaluation in Language Models:
Spurious Interactions with Response Length Bias Results
|
Spurious Interactions with Response Length Bias Results
|
https://arxiv.org/pdf/2504.13677?
|
by A Santilli · 2025 · Cited by 3 — Uncertainty Quantification (UQ) in Language. Models (LMs) is key to improving their safety and reliability. Evaluations often use metrics.
|
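The evaluation setup criticised above (ranking UQ methods by AUROC against a correctness function, with response length as a shared confounder) can be sketched in a few lines. This is an illustrative re-implementation with my own helper names, not the paper's code.

```python
import numpy as np

def auroc(uncertainty: np.ndarray, is_wrong: np.ndarray) -> float:
    """Rank-based AUROC of an uncertainty score for detecting wrong answers."""
    order = np.argsort(uncertainty)
    ranks = np.empty(len(uncertainty), dtype=float)
    ranks[order] = np.arange(1, len(uncertainty) + 1)
    n_pos, n_neg = (is_wrong == 1).sum(), (is_wrong == 0).sum()
    u_stat = ranks[is_wrong == 1].sum() - n_pos * (n_pos + 1) / 2
    return float(u_stat / (n_pos * n_neg))

def length_confound(uncertainty, correctness_score, lengths):
    """Correlation of both the UQ score and the correctness function with length;
    if both are large, AUROC comparisons may reflect the shared length bias."""
    return (np.corrcoef(uncertainty, lengths)[0, 1],
            np.corrcoef(correctness_score, lengths)[0, 1])
```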
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
mehta2024evaluating
|
\cite{mehta2024evaluating}
|
Evaluating the Fairness of Deep Learning Uncertainty Estimates in
Medical Image Analysis
|
http://arxiv.org/abs/2303.03242v1
|
Although deep learning (DL) models have shown great success in many medical
image analysis tasks, deployment of the resulting models into real clinical
contexts requires: (1) that they exhibit robustness and fairness across
different sub-populations, and (2) that the confidence in DL model predictions
be accurately expressed in the form of uncertainties. Unfortunately, recent
studies have indeed shown significant biases in DL models across demographic
subgroups (e.g., race, sex, age) in the context of medical image analysis,
indicating a lack of fairness in the models. Although several methods have been
proposed in the ML literature to mitigate a lack of fairness in DL models, they
focus entirely on the absolute performance between groups without considering
their effect on uncertainty estimation. In this work, we present the first
exploration of the effect of popular fairness models on overcoming biases
across subgroups in medical image analysis in terms of bottom-line performance,
and their effects on uncertainty quantification. We perform extensive
experiments on three different clinically relevant tasks: (i) skin lesion
classification, (ii) brain tumour segmentation, and (iii) Alzheimer's disease
clinical score regression. Our results indicate that popular ML methods, such
as data-balancing and distributionally robust optimization, succeed in
mitigating fairness issues in terms of the model performances for some of the
tasks. However, this can come at the cost of poor uncertainty estimates
associated with the model predictions. This tradeoff must be mitigated if
fairness models are to be adopted in medical image analysis.
| true | true |
Mehta, Raghav and Shui, Changjian and Arbel, Tal
| 2,024 | null | null | null | null |
Evaluating the Fairness of Deep Learning Uncertainty Estimates in
Medical Image Analysis
|
Evaluating the Fairness of Deep Learning Uncertainty Estimates in ...
|
https://arxiv.org/abs/2303.03242
|
In this work, we present the first exploration of the effect of popular fairness models on overcoming biases across subgroups in medical image analysis.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
kuzmin-etal-2023-uncertainty
|
\cite{kuzmin-etal-2023-uncertainty}
|
Uncertainty Estimation for Debiased Models: Does Fairness Hurt Reliability?
| null | null | true | false |
Kuzmin, Gleb and Vazhentsev, Artem and Shelmanov, Artem and Han, Xudong and Suster, Simon and Panov, Maxim and Panchenko, Alexander and Baldwin, Timothy
| 2,023 | null |
https://aclanthology.org/2023.ijcnlp-main.48/
|
10.18653/v1/2023.ijcnlp-main.48
| null |
Uncertainty Estimation for Debiased Models: Does Fairness Hurt Reliability?
|
Uncertainty Estimation for Debiased Models: Does Fairness Hurt ...
|
https://aclanthology.org/2023.ijcnlp-main.48/
|
Uncertainty Estimation for Debiased Models: Does Fairness Hurt Reliability?. In Proceedings of the 13th International Joint Conference on Natural Language
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
kuzucu2023uncertainty
|
\cite{kuzucu2023uncertainty}
|
Uncertainty as a Fairness Measure
| null | null | true | false |
Kuzucu, Selim and Cheong, Jiaee and Gunes, Hatice and Kalkan, Sinan
| 2,023 | null | null | null |
arXiv preprint arXiv:2312.11299
|
Uncertainty as a Fairness Measure
|
[2312.11299] Uncertainty-based Fairness Measures - arXiv
|
https://arxiv.org/abs/2312.11299
|
We introduce new fairness measures based on different types of uncertainties, namely, aleatoric uncertainty and epistemic uncertainty.
|
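As a rough illustration of an uncertainty-based fairness measure in the spirit of the entry above, one can compare a model's mean predictive entropy across demographic groups; a large gap signals that the model is systematically less certain for one group. The helper below is my own sketch, not the measures defined in the paper.

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    probs = np.clip(probs, 1e-12, 1.0)
    return -np.sum(probs * np.log(probs), axis=-1)

def uncertainty_fairness_gap(probs: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in mean predictive entropy between any two groups."""
    means = [predictive_entropy(probs[groups == g]).mean() for g in np.unique(groups)]
    return float(max(means) - min(means))
```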
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
kaiser2022uncertainty
|
\cite{kaiser2022uncertainty}
|
Uncertainty-aware predictive modeling for fair data-driven decisions
| null | null | true | false |
Kaiser, Patrick and Kern, Christoph and Rügamer, David
| 2,022 | null | null | null |
arXiv preprint arXiv:2211.02730
|
Uncertainty-aware predictive modeling for fair data-driven decisions
|
Uncertainty-aware predictive modeling for fair data-driven ...
|
https://openreview.net/forum?id=8DXj-ze0x_s
|
The authors highlight the importance of accounting for uncertainty in automated decision-making (ADM) systems in order to further promote fairness, and propose the use of the reject option in ADM, triggered when the level of uncertainty exceeds a certain threshold. The paper develops a fair decision-making system that leverages a distributional prediction model and a distribution-aware decision-making module, connecting uncertainty with fairness in automated decision-making systems. (NeurIPS 2022 Workshop on Trustworthy and Socially Responsible Machine Learning, OpenReview.)
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
tahir2023fairness
|
\cite{tahir2023fairness}
|
Fairness through Aleatoric Uncertainty
|
http://arxiv.org/abs/2304.03646v2
|
We propose a simple yet effective solution to tackle the often-competing
goals of fairness and utility in classification tasks. While fairness ensures
that the model's predictions are unbiased and do not discriminate against any
particular group or individual, utility focuses on maximizing the model's
predictive performance. This work introduces the idea of leveraging aleatoric
uncertainty (e.g., data ambiguity) to improve the fairness-utility trade-off.
Our central hypothesis is that aleatoric uncertainty is a key factor for
algorithmic fairness and samples with low aleatoric uncertainty are modeled
more accurately and fairly than those with high aleatoric uncertainty. We then
propose a principled model to improve fairness when aleatoric uncertainty is
high and improve utility elsewhere. Our approach first intervenes in the data
distribution to better decouple aleatoric uncertainty and epistemic
uncertainty. It then introduces a fairness-utility bi-objective loss defined
based on the estimated aleatoric uncertainty. Our approach is theoretically
guaranteed to improve the fairness-utility trade-off. Experimental results on
both tabular and image datasets show that the proposed approach outperforms
state-of-the-art methods w.r.t. the fairness-utility trade-off and w.r.t. both
group and individual fairness metrics. This work presents a fresh perspective
on the trade-off between utility and algorithmic fairness and opens a key
avenue for the potential of using prediction uncertainty in fair machine
learning.
| true | true |
Tahir, Anique and Cheng, Lu and Liu, Huan
| 2,023 | null | null | null | null |
Fairness through Aleatoric Uncertainty
|
Fairness through Aleatoric Uncertainty
|
http://arxiv.org/pdf/2304.03646v2
|
We propose a simple yet effective solution to tackle the often-competing
goals of fairness and utility in classification tasks. While fairness ensures
that the model's predictions are unbiased and do not discriminate against any
particular group or individual, utility focuses on maximizing the model's
predictive performance. This work introduces the idea of leveraging aleatoric
uncertainty (e.g., data ambiguity) to improve the fairness-utility trade-off.
Our central hypothesis is that aleatoric uncertainty is a key factor for
algorithmic fairness and samples with low aleatoric uncertainty are modeled
more accurately and fairly than those with high aleatoric uncertainty. We then
propose a principled model to improve fairness when aleatoric uncertainty is
high and improve utility elsewhere. Our approach first intervenes in the data
distribution to better decouple aleatoric uncertainty and epistemic
uncertainty. It then introduces a fairness-utility bi-objective loss defined
based on the estimated aleatoric uncertainty. Our approach is theoretically
guaranteed to improve the fairness-utility trade-off. Experimental results on
both tabular and image datasets show that the proposed approach outperforms
state-of-the-art methods w.r.t. the fairness-utility trade-off and w.r.t. both
group and individual fairness metrics. This work presents a fresh perspective
on the trade-off between utility and algorithmic fairness and opens a key
avenue for the potential of using prediction uncertainty in fair machine
learning.
|
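To illustrate the "emphasise fairness where aleatoric uncertainty is high, utility elsewhere" idea from the abstract above, a toy gated objective could look like the following. The gating and weighting scheme here is purely illustrative, not the paper's bi-objective loss.

```python
import numpy as np

def bi_objective_loss(ce_loss, fairness_penalty, aleatoric, tau=0.5):
    """Sketch of an uncertainty-gated fairness/utility trade-off: per-sample
    fairness penalties dominate when estimated aleatoric uncertainty exceeds
    `tau`, and the utility (cross-entropy) term dominates elsewhere."""
    w = (np.asarray(aleatoric) > tau).astype(float)
    return float(np.mean((1 - w) * np.asarray(ce_loss) + w * np.asarray(fairness_penalty)))
```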
Synthetic Generation and Latent Projection Denoising of Rim Lesions in
Multiple Sclerosis
|
2505.23353v1
|
Mcal
|
\cite{Mcal}
|
Synthetic quantitative MRI through relaxometry modelling
| null | null | true | false |
Callaghan, Martina F. and Mohammadi, Siawoosh and Weiskopf, Nikolaus
| 2,016 | null |
https://dx.doi.org/10.1002/nbm.3658
|
10.1002/nbm.3658
|
NMR in Biomedicine
|
Synthetic quantitative MRI through relaxometry modelling
|
Synthetic quantitative MRI through relaxometry modelling - PMC
|
https://pmc.ncbi.nlm.nih.gov/articles/PMC5132086/
|
The proposed synthetic qMRI approach shows promise for furthering our understanding of the inter‐relation of MRI parameters and for maximizing
|
Synthetic Generation and Latent Projection Denoising of Rim Lesions in
Multiple Sclerosis
|
2505.23353v1
|
Jand
|
\cite{Jand}
|
Synthetic MRI for stroke: a qualitative and quantitative pilot study
| null | null | true | false |
André, Joachim and Barrit, Sami and Jissendi, Patrice
| 2,022 | null | null |
10.1038/s41598-022-15204-8
|
Scientific Reports
|
Synthetic MRI for stroke: a qualitative and quantitative pilot study
|
(PDF) Synthetic MRI for stroke: a qualitative and quantitative pilot study
|
https://www.researchgate.net/publication/361826097_Synthetic_MRI_for_stroke_a_qualitative_and_quantitative_pilot_study
|
Synthetic MR provides qualitative and quantitative multi-parametric data about tissue properties in a single acquisition. Its use in stroke imaging is not
|
Synthetic Generation and Latent Projection Denoising of Rim Lesions in
Multiple Sclerosis
|
2505.23353v1
|
Emoy
|
\cite{Emoy}
|
A deep learning approach for synthetic MRI based on two routine sequences and training with synthetic data
| null | null | true | false |
Moya-Sáez, Elisa and Peña-Nogales, Óscar and Luis-García, Rodrigo de and Alberola-López, Carlos
| 2,021 | null |
https://www.sciencedirect.com/science/article/pii/S0169260721004454
|
https://doi.org/10.1016/j.cmpb.2021.106371
|
Computer Methods and Programs in Biomedicine
|
A deep learning approach for synthetic MRI based on two routine sequences and training with synthetic data
|
A deep learning approach for synthetic MRI based on two routine ...
|
https://pubmed.ncbi.nlm.nih.gov/34525411/
|
**Conclusions:** These results show that our approach is able to provide realistic parametric maps and weighted images out of a CNN that (a) is trained with a synthetic dataset and (b) needs only two inputs, which are in turn obtained from a common full-brain acquisition that takes less than 8 min of scan time.
|
Synthetic Generation and Latent Projection Denoising of Rim Lesions in
Multiple Sclerosis
|
2505.23353v1
|
Kgop
|
\cite{Kgop}
|
Synthetic data in generalizable, learning-based neuroimaging
| null | null | true | false |
Gopinath, Karthik and Hoopes, Andrew and Alexander, Daniel C. and Arnold, Steven E. and Balbastre, Yael and Billot, Benjamin and Casamitjana, Adrià and Cheng, You and Chua, Russ Yue Zhi and Edlow, Brian L. and Fischl, Bruce and Gazula, Harshvardhan and Hoffmann, Malte and Keene, C. Dirk and Kim, Seunghoi and Kimberly, W. Taylor and Laguna, Sonia and Larson, Kathleen E. and Van Leemput, Koen and Puonti, Oula and Rodrigues, Livia M. and Rosen, Matthew S. and Tregidgo, Henry F. J. and Varadarajan, Divya and Young, Sean I. and Dalca, Adrian V. and Iglesias, Juan Eugenio
| 2,024 |
11
|
https://doi.org/10.1162/imag_a_00337
|
10.1162/imag_a_00337
|
Imaging Neuroscience
|
Synthetic data in generalizable, learning-based neuroimaging
|
Synthetic data in generalizable, learning-based ...
|
https://direct.mit.edu/imag/article/doi/10.1162/imag_a_00337/124867/Synthetic-data-in-generalizable-learning-based
|
by K Gopinath · 2024 · Cited by 17 — Synthetic data have emerged as an attractive option for developing machine-learning methods in human neuroimaging, particularly in magnetic resonance imaging (
|
Synthetic Generation and Latent Projection Denoising of Rim Lesions in
Multiple Sclerosis
|
2505.23353v1
|
Jigl
|
\cite{Jigl}
|
SynthSR: A public AI tool to turn heterogeneous clinical brain scans into high-resolution T1-weighted images for 3D morphometry
| null | null | true | false |
Juan E. Iglesias and Benjamin Billot and Yaël Balbastre and Colin Magdamo and Steven E. Arnold and Sudeshna Das and Brian L. Edlow and Daniel C. Alexander and Polina Golland and Bruce Fischl
| 2,023 | null |
https://www.science.org/doi/abs/10.1126/sciadv.add3607
|
10.1126/sciadv.add3607
|
Science Advances
|
SynthSR: A public AI tool to turn heterogeneous clinical brain scans into high-resolution T1-weighted images for 3D morphometry
|
SynthSR: A public AI tool to turn heterogeneous clinical brain scans ...
|
https://pubmed.ncbi.nlm.nih.gov/36724222/
|
|
Synthetic Generation and Latent Projection Denoising of Rim Lesions in
Multiple Sclerosis
|
2505.23353v1
|
jwil
|
\cite{jwil}
|
Limits of Transfer Learning
|
http://arxiv.org/abs/2006.12694v1
|
Transfer learning involves taking information and insight from one problem
domain and applying it to a new problem domain. Although widely used in
practice, theory for transfer learning remains less well-developed. To address
this, we prove several novel results related to transfer learning, showing the
need to carefully select which sets of information to transfer and the need for
dependence between transferred information and target problems. Furthermore, we
prove how the degree of probabilistic change in an algorithm using transfer
learning places an upper bound on the amount of improvement possible. These
results build on the algorithmic search framework for machine learning,
allowing the results to apply to a wide range of learning problems using
transfer.
| true | true |
Jake Williams and Abel Tadesse and Tyler Sam and Huey Sun and George D. Montanez
| 2,020 | null |
https://arxiv.org/abs/2006.12694
| null | null |
Limits of Transfer Learning
|
Limits of Transfer Learning
|
http://arxiv.org/pdf/2006.12694v1
|
Transfer learning involves taking information and insight from one problem
domain and applying it to a new problem domain. Although widely used in
practice, theory for transfer learning remains less well-developed. To address
this, we prove several novel results related to transfer learning, showing the
need to carefully select which sets of information to transfer and the need for
dependence between transferred information and target problems. Furthermore, we
prove how the degree of probabilistic change in an algorithm using transfer
learning places an upper bound on the amount of improvement possible. These
results build on the algorithmic search framework for machine learning,
allowing the results to apply to a wide range of learning problems using
transfer.
|
Synthetic Generation and Latent Projection Denoising of Rim Lesions in
Multiple Sclerosis
|
2505.23353v1
|
weli
|
\cite{weli}
|
Detecting Alzheimer's Disease on Small Dataset: A Knowledge Transfer Perspective
| null | null | true | false |
Li, Wei and Zhao, Yifei and Chen, Xi and Xiao, Yang and Qin, Yuanyuan
| 2,019 | null | null |
10.1109/JBHI.2018.2839771
|
IEEE Journal of Biomedical and Health Informatics
|
Detecting Alzheimer's Disease on Small Dataset: A Knowledge Transfer Perspective
|
Detecting Alzheimer's Disease on Small Dataset
|
http://ieeexplore.ieee.org/document/8362917/
|
In addition, we proposed an effective knowledge transfer method to diminish the disparity among different datasets and improve the
|
Synthetic Generation and Latent Projection Denoising of Rim Lesions in
Multiple Sclerosis
|
2505.23353v1
|
jval
|
\cite{jval}
|
Transfer Learning in Magnetic Resonance Brain Imaging: a Systematic
Review
|
http://arxiv.org/abs/2102.01530v2
|
Transfer learning refers to machine learning techniques that focus on
acquiring knowledge from related tasks to improve generalization in the tasks
of interest. In MRI, transfer learning is important for developing strategies
that address the variation in MR images. Additionally, transfer learning is
beneficial to re-utilize machine learning models that were trained to solve
related tasks to the task of interest. Our goal is to identify research
directions, gaps of knowledge, applications, and widely used strategies among
the transfer learning approaches applied in MR brain imaging. We performed a
systematic literature search for articles that applied transfer learning to MR
brain imaging. We screened 433 studies and we categorized and extracted
relevant information, including task type, application, and machine learning
methods. Furthermore, we closely examined brain MRI-specific transfer learning
approaches and other methods that tackled privacy, unseen target domains, and
unlabeled data. We found 129 articles that applied transfer learning to brain
MRI tasks. The most frequent applications were dementia related classification
tasks and brain tumor segmentation. A majority of articles utilized transfer
learning on convolutional neural networks (CNNs). Only few approaches were
clearly brain MRI specific, considered privacy issues, unseen target domains or
unlabeled data. We proposed a new categorization to group specific, widely-used
approaches. There is an increasing interest in transfer learning within brain
MRI. Public datasets have contributed to the popularity of Alzheimer's
diagnostics/prognostics and tumor segmentation. Likewise, the availability of
pretrained CNNs has promoted their utilization. Finally, the majority of the
surveyed studies did not examine in detail the interpretation of their
strategies after applying transfer learning, and did not compare to other
approaches.
| true | true |
Valverde, Juan Miguel and Imani, Vandad and Abdollahzadeh, Ali and De Feo, Riccardo and Prakash, Mithilesh and Ciszek, Robert and Tohka, Jussi
| 2,021 | null |
http://dx.doi.org/10.3390/jimaging7040066
|
10.3390/jimaging7040066
|
Journal of Imaging
|
Transfer Learning in Magnetic Resonance Brain Imaging: a Systematic
Review
|
Transfer Learning in Magnetic Resonance Brain Imaging
|
https://www.researchgate.net/publication/350576269_Transfer_Learning_in_Magnetic_Resonance_Brain_Imaging_A_Systematic_Review
|
The aim of this review is to identify research directions, gaps in knowledge, applications, and widely used strategies among the transfer learning approaches
|
Synthetic Generation and Latent Projection Denoising of Rim Lesions in
Multiple Sclerosis
|
2505.23353v1
|
smat
|
\cite{smat}
|
Employing deep learning and transfer learning for accurate brain tumor detection
| null | null | true | false |
Mathivanan, Sandeep Kumar and Sonaimuthu, Sridevi and Murugesan, Sankar and Rajadurai, Hariharan and Shivahare, Basu Dev and Shah, Mohd Asif
| 2,024 | null | null |
10.1038/s41598-024-57970-7
|
Scientific Reports
|
Employing deep learning and transfer learning for accurate brain tumor detection
|
(PDF) Employing deep learning and transfer learning for accurate ...
|
https://www.researchgate.net/publication/379337705_Employing_deep_learning_and_transfer_learning_for_accurate_brain_tumor_detection
|
This study delves into the potential of deep transfer learning architectures to elevate the accuracy of brain tumor diagnosis. Transfer learning
|
Synthetic Generation and Latent Projection Denoising of Rim Lesions in
Multiple Sclerosis
|
2505.23353v1
|
Vtha
|
\cite{Vtha}
|
SinGAN-Seg: Synthetic training data generation for medical image segmentation
| null | null | true | false |
Thambawita, Vajira and Salehi, Pegah and Sheshkal, Sajad Amouei and Hicks, Steven A. and Hammer, Hugo L. and Parasa, Sravanthi and Lange, Thomas de and Halvorsen, Pål and Riegler, Michael A.
| 2,022 |
05
|
https://doi.org/10.1371/journal.pone.0267976
|
10.1371/journal.pone.0267976
|
PLOS ONE
|
SinGAN-Seg: Synthetic training data generation for medical image segmentation
|
SinGAN-Seg: Synthetic training data generation for medical image segmentation
|
http://arxiv.org/pdf/2107.00471v2
|
Analyzing medical data to find abnormalities is a time-consuming and costly
task, particularly for rare abnormalities, requiring tremendous efforts from
medical experts. Artificial intelligence has become a popular tool for the
automatic processing of medical data, acting as a supportive tool for doctors.
However, the machine learning models used to build these tools are highly
dependent on the data used to train them. Large amounts of data can be
difficult to obtain in medicine due to privacy, expensive and time-consuming
annotations, and a general lack of data samples for infrequent lesions. Here,
we present a novel synthetic data generation pipeline, called SinGAN-Seg, to
produce synthetic medical images with corresponding masks using a single
training image. Our method is different from the traditional GANs because our
model needs only a single image and the corresponding ground truth to train.
Our method produces alternative artificial segmentation datasets with ground
truth masks when real datasets are not allowed to share. The pipeline is
evaluated using qualitative and quantitative comparisons between real and
synthetic data to show that the style transfer technique used in our pipeline
significantly improves the quality of the generated data and our method is
better than other state-of-the-art GANs to prepare synthetic images when the
size of training datasets are limited. By training UNet++ using both real and
the synthetic data generated from the SinGAN-Seg pipeline, we show that models
trained with synthetic data have very close performances to those trained on
real data when the datasets have a considerable amount of data. In contrast,
Synthetic data generated from the SinGAN-Seg pipeline can improve the
performance of segmentation models when training datasets do not have a
considerable amount of data. The code is available on GitHub.
|
Synthetic Generation and Latent Projection Denoising of Rim Lesions in
Multiple Sclerosis
|
2505.23353v1
|
Awah
|
\cite{Awah}
|
CovidGAN: Data Augmentation Using Auxiliary Classifier GAN for Improved
Covid-19 Detection
|
http://arxiv.org/abs/2103.05094v1
|
Coronavirus (COVID-19) is a viral disease caused by severe acute respiratory
syndrome coronavirus 2 (SARS-CoV-2). The spread of COVID-19 seems to have a
detrimental effect on the global economy and health. A positive chest X-ray of
infected patients is a crucial step in the battle against COVID-19. Early
results suggest that abnormalities exist in chest X-rays of patients suggestive
of COVID-19. This has led to the introduction of a variety of deep learning
systems and studies have shown that the accuracy of COVID-19 patient detection
through the use of chest X-rays is strongly optimistic. Deep learning networks
like convolutional neural networks (CNNs) need a substantial amount of training
data. Because the outbreak is recent, it is difficult to gather a significant
number of radiographic images in such a short time. Therefore, in this
research, we present a method to generate synthetic chest X-ray (CXR) images by
developing an Auxiliary Classifier Generative Adversarial Network (ACGAN) based
model called CovidGAN. In addition, we demonstrate that the synthetic images
produced from CovidGAN can be utilized to enhance the performance of CNN for
COVID-19 detection. Classification using CNN alone yielded 85% accuracy. By
adding synthetic images produced by CovidGAN, the accuracy increased to 95%. We
hope this method will speed up COVID-19 detection and lead to more robust
systems of radiology.
| true | true |
Waheed, Abdul and Goyal, Muskan and Gupta, Deepak and Khanna, Ashish and Al-Turjman, Fadi and Pinheiro, Plácido Rogerio
| 2,020 | null | null |
10.1109/ACCESS.2020.2994762
|
IEEE Access
|
CovidGAN: Data Augmentation Using Auxiliary Classifier GAN for Improved
Covid-19 Detection
|
(PDF) CovidGAN: Data Augmentation using Auxiliary Classifier GAN ...
|
https://www.researchgate.net/publication/341401062_CovidGAN_Data_Augmentation_using_Auxiliary_Classifier_GAN_for_Improved_Covid-19_Detection
|
By adding synthetic images produced by CovidGAN, the accuracy increased to 95%. We hope this method will speed up COVID-19 detection and lead to
|
Synthetic Generation and Latent Projection Denoising of Rim Lesions in
Multiple Sclerosis
|
2505.23353v1
|
Bahm
|
\cite{Bahm}
|
Brain Tumor Classification Using a Combination of Variational Autoencoders and Generative Adversarial Networks
| null | null | true | false |
Ahmad, Bilal and Sun, Jun and You, Qi and Palade, Vasile and Mao, Zhongjie
| 2,022 | null |
https://www.mdpi.com/2227-9059/10/2/223
| null |
Biomedicines
|
Brain Tumor Classification Using a Combination of Variational Autoencoders and Generative Adversarial Networks
|
(PDF) Brain Tumor Classification Using a Combination of Variational ...
|
https://www.researchgate.net/publication/358017457_Brain_Tumor_Classification_Using_a_Combination_of_Variational_Autoencoders_and_Generative_Adversarial_Networks
|
This paper proposes a framework based on unsupervised deep generative neural networks to solve this limitation. We combine two generative models in the proposed
|
Synthetic Generation and Latent Projection Denoising of Rim Lesions in
Multiple Sclerosis
|
2505.23353v1
|
Hzha
|
\cite{Hzha}
|
QSMRim-Net: Imbalance-aware learning for identification of chronic active multiple sclerosis lesions on quantitative susceptibility maps
| null | null | true | false |
Zhang, Hang and Nguyen, Thanh D. and Zhang, Jinwei and Marcille, Melanie and Spincemaille, Pascal and Wang, Yi and Gauthier, Susan A. and Sweeney, Elizabeth M.
| 2,022 | null |
https://www.sciencedirect.com/science/article/pii/S2213158222000444
|
https://doi.org/10.1016/j.nicl.2022.102979
|
NeuroImage: Clinical
|
QSMRim-Net: Imbalance-aware learning for identification of chronic active multiple sclerosis lesions on quantitative susceptibility maps
|
QSMRim-Net: Imbalance-aware learning for identification of chronic ...
|
https://pubmed.ncbi.nlm.nih.gov/35247730/
|
We present QSMRim-Net, a data imbalance-aware deep neural network that fuses lesion-level radiomic and convolutional image features for automated identification of rim+ lesions on QSM. (PubMed)
|
Synthetic Generation and Latent Projection Denoising of Rim Lesions in
Multiple Sclerosis
|
2505.23353v1
|
Ddab
|
\cite{Ddab}
|
DeepSMOTE: Fusing Deep Learning and SMOTE for Imbalanced Data
|
http://arxiv.org/abs/2105.02340v1
|
Despite over two decades of progress, imbalanced data is still considered a
significant challenge for contemporary machine learning models. Modern advances
in deep learning have magnified the importance of the imbalanced data problem.
The two main approaches to address this issue are based on loss function
modifications and instance resampling. Instance sampling is typically based on
Generative Adversarial Networks (GANs), which may suffer from mode collapse.
Therefore, there is a need for an oversampling method that is specifically
tailored to deep learning models, can work on raw images while preserving their
properties, and is capable of generating high quality, artificial images that
can enhance minority classes and balance the training set. We propose DeepSMOTE
- a novel oversampling algorithm for deep learning models. It is simple, yet
effective in its design. It consists of three major components: (i) an
encoder/decoder framework; (ii) SMOTE-based oversampling; and (iii) a dedicated
loss function that is enhanced with a penalty term. An important advantage of
DeepSMOTE over GAN-based oversampling is that DeepSMOTE does not require a
discriminator, and it generates high-quality artificial images that are both
information-rich and suitable for visual inspection. DeepSMOTE code is publicly
available at: https://github.com/dd1github/DeepSMOTE
| true | true |
Damien Dablain and Bartosz Krawczyk and Nitesh V. Chawla
| 2,021 | null |
https://arxiv.org/abs/2105.02340
| null | null |
DeepSMOTE: Fusing Deep Learning and SMOTE for Imbalanced Data
|
DeepSMOTE: Fusing Deep Learning and SMOTE for Imbalanced Data
|
http://arxiv.org/pdf/2105.02340v1
|
Despite over two decades of progress, imbalanced data is still considered a
significant challenge for contemporary machine learning models. Modern advances
in deep learning have magnified the importance of the imbalanced data problem.
The two main approaches to address this issue are based on loss function
modifications and instance resampling. Instance sampling is typically based on
Generative Adversarial Networks (GANs), which may suffer from mode collapse.
Therefore, there is a need for an oversampling method that is specifically
tailored to deep learning models, can work on raw images while preserving their
properties, and is capable of generating high quality, artificial images that
can enhance minority classes and balance the training set. We propose DeepSMOTE
- a novel oversampling algorithm for deep learning models. It is simple, yet
effective in its design. It consists of three major components: (i) an
encoder/decoder framework; (ii) SMOTE-based oversampling; and (iii) a dedicated
loss function that is enhanced with a penalty term. An important advantage of
DeepSMOTE over GAN-based oversampling is that DeepSMOTE does not require a
discriminator, and it generates high-quality artificial images that are both
information-rich and suitable for visual inspection. DeepSMOTE code is publicly
available at: https://github.com/dd1github/DeepSMOTE
|
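The encode -> SMOTE-interpolate -> decode recipe described above can be sketched with plain numpy once images have been mapped to latent codes (encoder and decoder omitted; naming is my own, not the DeepSMOTE code).

```python
import numpy as np

def smote_in_latent_space(z_minority: np.ndarray, n_new: int, k: int = 5,
                          rng=np.random.default_rng(0)) -> np.ndarray:
    """SMOTE-style oversampling applied to encoder outputs of a minority class.

    Each synthetic code is a random convex combination of a minority code and
    one of its k nearest minority neighbours; decoding the codes yields the
    synthetic images.
    """
    new = []
    for _ in range(n_new):
        i = rng.integers(len(z_minority))
        dists = np.linalg.norm(z_minority - z_minority[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]     # skip the point itself
        j = rng.choice(neighbours)
        lam = rng.random()
        new.append(z_minority[i] + lam * (z_minority[j] - z_minority[i]))
    return np.stack(new)
```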
Synthetic Generation and Latent Projection Denoising of Rim Lesions in
Multiple Sclerosis
|
2505.23353v1
|
Msal
|
\cite{Msal}
|
Multiple Sclerosis Lesion Synthesis in MRI using an encoder-decoder
U-NET
|
http://arxiv.org/abs/1901.05733v1
|
In this paper, we propose generating synthetic multiple sclerosis (MS)
lesions on MRI images with the final aim to improve the performance of
supervised machine learning algorithms, therefore avoiding the problem of the
lack of available ground truth. We propose a two-input two-output fully
convolutional neural network model for MS lesion synthesis in MRI images. The
lesion information is encoded as discrete binary intensity level masks passed
to the model and stacked with the input images. The model is trained end-to-end
without the need for manually annotating the lesions in the training set. We
then perform the generation of synthetic lesions on healthy images via
registration of patient images, which are subsequently used for data
augmentation to increase the performance for supervised MS lesion detection
algorithms. Our pipeline is evaluated on MS patient data from an in-house
clinical dataset and the public ISBI2015 challenge dataset. The evaluation is
based on measuring the similarities between the real and the synthetic images
as well as in terms of lesion detection performance by segmenting both the
original and synthetic images individually using a state-of-the-art
segmentation framework. We also demonstrate the usage of synthetic MS lesions
generated on healthy images as data augmentation. We analyze a scenario of
limited training data (one-image training) to demonstrate the effect of the
data augmentation on both datasets. Our results significantly show the
effectiveness of the usage of synthetic MS lesion images. For the ISBI2015
challenge, our one-image model trained using only a single image plus the
synthetic data augmentation strategy showed a performance similar to that of
other CNN methods that were fully trained using the entire training set,
yielding a comparable human expert rater performance
| true | true |
Salem, Mostafa and Valverde, Sergi and Cabezas, Mariano and Pareto, Deborah and Oliver, Arnau and Salvi, Joaquim and Rovira, Àlex and Lladó, Xavier
| 2,019 | null | null |
10.1109/ACCESS.2019.2900198
|
IEEE Access
|
Multiple Sclerosis Lesion Synthesis in MRI using an encoder-decoder
U-NET
|
(PDF) Multiple Sclerosis Lesion Synthesis in MRI using an encoder ...
|
https://www.researchgate.net/publication/331238531_Multiple_Sclerosis_Lesion_Synthesis_in_MRI_using_an_encoder-decoder_U-NET
|
In this paper, we propose generating synthetic multiple sclerosis (MS) lesions on MRI images with the final aim to improve the performance of supervised machine
|
Synthetic Generation and Latent Projection Denoising of Rim Lesions in
Multiple Sclerosis
|
2505.23353v1
|
Igoo
|
\cite{Igoo}
|
Generative Adversarial Networks
|
http://arxiv.org/abs/1406.2661v1
|
We propose a new framework for estimating generative models via an
adversarial process, in which we simultaneously train two models: a generative
model G that captures the data distribution, and a discriminative model D that
estimates the probability that a sample came from the training data rather than
G. The training procedure for G is to maximize the probability of D making a
mistake. This framework corresponds to a minimax two-player game. In the space
of arbitrary functions G and D, a unique solution exists, with G recovering the
training data distribution and D equal to 1/2 everywhere. In the case where G
and D are defined by multilayer perceptrons, the entire system can be trained
with backpropagation. There is no need for any Markov chains or unrolled
approximate inference networks during either training or generation of samples.
Experiments demonstrate the potential of the framework through qualitative and
quantitative evaluation of the generated samples.
| true | true |
Ian J. Goodfellow and Jean Pouget-Abadie and Mehdi Mirza and Bing Xu and David Warde-Farley and Sherjil Ozair and Aaron Courville and Yoshua Bengio
| 2,014 | null |
https://arxiv.org/abs/1406.2661
| null | null |
Generative Adversarial Networks
|
Generative Adversarial Networks
|
http://arxiv.org/pdf/1406.2661v1
|
We propose a new framework for estimating generative models via an
adversarial process, in which we simultaneously train two models: a generative
model G that captures the data distribution, and a discriminative model D that
estimates the probability that a sample came from the training data rather than
G. The training procedure for G is to maximize the probability of D making a
mistake. This framework corresponds to a minimax two-player game. In the space
of arbitrary functions G and D, a unique solution exists, with G recovering the
training data distribution and D equal to 1/2 everywhere. In the case where G
and D are defined by multilayer perceptrons, the entire system can be trained
with backpropagation. There is no need for any Markov chains or unrolled
approximate inference networks during either training or generation of samples.
Experiments demonstrate the potential of the framework through qualitative and
quantitative evaluation of the generated samples.
|
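The two-player game described in the abstract above is usually written as the standard GAN value function, reproduced here for reference:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```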
Synthetic Generation and Latent Projection Denoising of Rim Lesions in
Multiple Sclerosis
|
2505.23353v1
|
Wxia
|
\cite{Wxia}
|
GAN Inversion: A Survey
|
http://arxiv.org/abs/2101.05278v5
|
GAN inversion aims to invert a given image back into the latent space of a
pretrained GAN model, for the image to be faithfully reconstructed from the
inverted code by the generator. As an emerging technique to bridge the real and
fake image domains, GAN inversion plays an essential role in enabling the
pretrained GAN models such as StyleGAN and BigGAN to be used for real image
editing applications. Meanwhile, GAN inversion also provides insights on the
interpretation of GAN's latent space and how the realistic images can be
generated. In this paper, we provide an overview of GAN inversion with a focus
on its recent algorithms and applications. We cover important techniques of GAN
inversion and their applications to image restoration and image manipulation.
We further elaborate on some trends and challenges for future directions.
| true | true |
Weihao Xia and Yulun Zhang and Yujiu Yang and Jing-Hao Xue and Bolei Zhou and Ming-Hsuan Yang
| 2,022 | null |
https://arxiv.org/abs/2101.05278
| null | null |
GAN Inversion: A Survey
|
GAN Inversion: A Survey
|
http://arxiv.org/pdf/2101.05278v5
|
GAN inversion aims to invert a given image back into the latent space of a
pretrained GAN model, for the image to be faithfully reconstructed from the
inverted code by the generator. As an emerging technique to bridge the real and
fake image domains, GAN inversion plays an essential role in enabling the
pretrained GAN models such as StyleGAN and BigGAN to be used for real image
editing applications. Meanwhile, GAN inversion also provides insights on the
interpretation of GAN's latent space and how the realistic images can be
generated. In this paper, we provide an overview of GAN inversion with a focus
on its recent algorithms and applications. We cover important techniques of GAN
inversion and their applications to image restoration and image manipulation.
We further elaborate on some trends and challenges for future directions.
|
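Optimization-based GAN inversion, as surveyed above, amounts to searching the latent space for a code whose generation matches a target image. The sketch below keeps things framework-free by using a finite-difference gradient on an arbitrary `generator` callable; real pipelines use autodiff and perceptual losses on a pretrained StyleGAN/BigGAN.

```python
import numpy as np

def invert_image(x, generator, latent_dim=64, steps=500, lr=0.05,
                 rng=np.random.default_rng(0)):
    """Toy GAN inversion by latent optimisation: find z minimising ||G(z) - x||^2."""
    z = rng.normal(size=latent_dim)
    eps = 1e-3
    for _ in range(steps):
        base = np.sum((generator(z) - x) ** 2)
        grad = np.zeros_like(z)
        for i in range(latent_dim):            # finite-difference gradient estimate
            zp = z.copy()
            zp[i] += eps
            grad[i] = (np.sum((generator(zp) - x) ** 2) - base) / eps
        z -= lr * grad
    return z
```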
Synthetic Generation and Latent Projection Denoising of Rim Lesions in
Multiple Sclerosis
|
2505.23353v1
|
Mmir
|
\cite{Mmir}
|
Conditional Generative Adversarial Nets
|
http://arxiv.org/abs/1411.1784v1
|
Generative Adversarial Nets [8] were recently introduced as a novel way to
train generative models. In this work we introduce the conditional version of
generative adversarial nets, which can be constructed by simply feeding the
data, y, we wish to condition on to both the generator and discriminator. We
show that this model can generate MNIST digits conditioned on class labels. We
also illustrate how this model could be used to learn a multi-modal model, and
provide preliminary examples of an application to image tagging in which we
demonstrate how this approach can generate descriptive tags which are not part
of training labels.
| true | true |
Mehdi Mirza and Simon Osindero
| 2,014 | null |
http://arxiv.org/abs/1411.1784
| null |
CoRR
|
Conditional Generative Adversarial Nets
|
Conditional Generative Adversarial Nets
|
http://arxiv.org/pdf/1411.1784v1
|
Generative Adversarial Nets [8] were recently introduced as a novel way to
train generative models. In this work we introduce the conditional version of
generative adversarial nets, which can be constructed by simply feeding the
data, y, we wish to condition on to both the generator and discriminator. We
show that this model can generate MNIST digits conditioned on class labels. We
also illustrate how this model could be used to learn a multi-modal model, and
provide preliminary examples of an application to image tagging in which we
demonstrate how this approach can generate descriptive tags which are not part
of training labels.
|
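The conditioning mechanism described above is, at its core, just feeding the label to both networks; the most common minimal realisation concatenates a one-hot label with the noise vector, as in this sketch (my own naming, not the paper's code).

```python
import numpy as np

def conditional_generator_input(z: np.ndarray, label: int, n_classes: int) -> np.ndarray:
    """One-hot encode the class label y and concatenate it with the noise z
    before feeding the generator; the discriminator receives the same label
    alongside the (real or generated) sample."""
    y = np.zeros(n_classes)
    y[label] = 1.0
    return np.concatenate([z, y])

# e.g. a generator for MNIST digits would receive concat(z, one_hot(digit)).
```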
Synthetic Generation and Latent Projection Denoising of Rim Lesions in
Multiple Sclerosis
|
2505.23353v1
|
Kthe
|
\cite{Kthe}
|
Robustness of Conditional GANs to Noisy Labels
|
http://arxiv.org/abs/1811.03205v1
|
We study the problem of learning conditional generators from noisy labeled
samples, where the labels are corrupted by random noise. A standard training of
conditional GANs will not only produce samples with wrong labels, but also
generate poor quality samples. We consider two scenarios, depending on whether
the noise model is known or not. When the distribution of the noise is known,
we introduce a novel architecture which we call Robust Conditional GAN (RCGAN).
The main idea is to corrupt the label of the generated sample before feeding to
the adversarial discriminator, forcing the generator to produce samples with
clean labels. This approach of passing through a matching noisy channel is
justified by corresponding multiplicative approximation bounds between the loss
of the RCGAN and the distance between the clean real distribution and the
generator distribution. This shows that the proposed approach is robust, when
used with a carefully chosen discriminator architecture, known as projection
discriminator. When the distribution of the noise is not known, we provide an
extension of our architecture, which we call RCGAN-U, that learns the noise
model simultaneously while training the generator. We show experimentally on
MNIST and CIFAR-10 datasets that both the approaches consistently improve upon
baseline approaches, and RCGAN-U closely matches the performance of RCGAN.
| true | true |
Kiran Koshy Thekumparampil and Ashish Khetan and Zinan Lin and Sewoong Oh
| 2,018 | null |
https://arxiv.org/abs/1811.03205
| null | null |
Robustness of Conditional GANs to Noisy Labels
|
Robustness of Conditional GANs to Noisy Labels
|
http://arxiv.org/pdf/1811.03205v1
|
We study the problem of learning conditional generators from noisy labeled
samples, where the labels are corrupted by random noise. A standard training of
conditional GANs will not only produce samples with wrong labels, but also
generate poor quality samples. We consider two scenarios, depending on whether
the noise model is known or not. When the distribution of the noise is known,
we introduce a novel architecture which we call Robust Conditional GAN (RCGAN).
The main idea is to corrupt the label of the generated sample before feeding to
the adversarial discriminator, forcing the generator to produce samples with
clean labels. This approach of passing through a matching noisy channel is
justified by corresponding multiplicative approximation bounds between the loss
of the RCGAN and the distance between the clean real distribution and the
generator distribution. This shows that the proposed approach is robust, when
used with a carefully chosen discriminator architecture, known as projection
discriminator. When the distribution of the noise is not known, we provide an
extension of our architecture, which we call RCGAN-U, that learns the noise
model simultaneously while training the generator. We show experimentally on
MNIST and CIFAR-10 datasets that both the approaches consistently improve upon
baseline approaches, and RCGAN-U closely matches the performance of RCGAN.
|
Synthetic Generation and Latent Projection Denoising of Rim Lesions in
Multiple Sclerosis
|
2505.23353v1
|
Wehua
|
\cite{Wehua}
|
Correcting Noisy Multilabel Predictions: Modeling Label Noise through
Latent Space Shifts
|
http://arxiv.org/abs/2502.14281v3
|
Noise in data appears to be inevitable in most real-world machine learning
applications and would cause severe overfitting problems. Not only can data
features contain noise, but labels are also prone to be noisy due to human
input. In this paper, rather than noisy label learning in multiclass
classifications, we instead focus on the less explored area of noisy label
learning for multilabel classifications. Specifically, we investigate the
post-correction of predictions generated from classifiers learned with noisy
labels. The reasons are two-fold. Firstly, this approach can directly work with
the trained models to save computational resources. Secondly, it could be
applied on top of other noisy label correction techniques to achieve further
improvements. To handle this problem, we appeal to deep generative approaches
that are possible for uncertainty estimation. Our model posits that label noise
arises from a stochastic shift in the latent variable, providing a more robust
and beneficial means for noisy learning. We develop both unsupervised and
semi-supervised learning methods for our model. The extensive empirical study
presents solid evidence to that our approach is able to consistently improve
the independent models and performs better than a number of existing methods
across various noisy label settings. Moreover, a comprehensive empirical
analysis of the proposed method is carried out to validate its robustness,
including sensitivity analysis and an ablation study, among other elements.
| true | true |
Weipeng Huang and Qin Li and Yang Xiao and Cheng Qiao and Tie Cai and Junwei Liao and Neil J. Hurley and Guangyuan Piao
| 2,025 | null |
https://arxiv.org/abs/2502.14281
| null | null |
Correcting Noisy Multilabel Predictions: Modeling Label Noise through
Latent Space Shifts
|
[PDF] Correcting Noisy Multilabel Predictions: Modeling Label Noise ...
|
http://arxiv.org/pdf/2502.14281
|
Once the shifted latent variable still locates in the right latent space, the generated label noise will also follow the pattern. (in particular
|
Synthetic Generation and Latent Projection Denoising of Rim Lesions in
Multiple Sclerosis
|
2505.23353v1
|
Hbae
|
\cite{Hbae}
|
From Noisy Prediction to True Label: Noisy Prediction Calibration via
Generative Model
|
http://arxiv.org/abs/2205.00690v3
|
Noisy labels are inevitable yet problematic in machine learning society. It
ruins the generalization of a classifier by making the classifier over-fitted
to noisy labels. Existing methods on noisy label have focused on modifying the
classifier during the training procedure. It has two potential problems. First,
these methods are not applicable to a pre-trained classifier without further
access to training. Second, it is not easy to train a classifier and regularize
all negative effects from noisy labels, simultaneously. We suggest a new branch
of method, Noisy Prediction Calibration (NPC) in learning with noisy labels.
Through the introduction and estimation of a new type of transition matrix via
generative model, NPC corrects the noisy prediction from the pre-trained
classifier to the true label as a post-processing scheme. We prove that NPC
theoretically aligns with the transition matrix based methods. Yet, NPC
empirically provides more accurate pathway to estimate true label, even without
involvement in classifier learning. Also, NPC is applicable to any classifier
trained with noisy label methods, if training instances and its predictions are
available. Our method, NPC, boosts the classification performances of all
baseline models on both synthetic and real-world datasets. The implemented code
is available at https://github.com/BaeHeeSun/NPC.
| true | true |
HeeSun Bae and Seungjae Shin and Byeonghu Na and JoonHo Jang and Kyungwoo Song and Il-Chul Moon
| 2,022 | null |
https://arxiv.org/abs/2205.00690
| null | null |
From Noisy Prediction to True Label: Noisy Prediction Calibration via
Generative Model
|
[PDF] Noisy Prediction Calibration via Generative Model
|
https://icml.cc/media/icml-2022/Slides/18350_oZIPQgX.pdf
|
NPC models the relation between output of a classifier and the true label via generative model. NPC consistently boosts the classification performances of pre-
|
Synthetic Generation and Latent Projection Denoising of Rim Lesions in
Multiple Sclerosis
|
2505.23353v1
|
Vkel
|
\cite{Vkel}
|
Prior Image-Constrained Reconstruction using Style-Based Generative
Models
|
http://arxiv.org/abs/2102.12525v2
|
Obtaining a useful estimate of an object from highly incomplete imaging
measurements remains a holy grail of imaging science. Deep learning methods
have shown promise in learning object priors or constraints to improve the
conditioning of an ill-posed imaging inverse problem. In this study, a
framework for estimating an object of interest that is semantically related to
a known prior image, is proposed. An optimization problem is formulated in the
disentangled latent space of a style-based generative model, and semantically
meaningful constraints are imposed using the disentangled latent representation
of the prior image. Stable recovery from incomplete measurements with the help
of a prior image is theoretically analyzed. Numerical experiments demonstrating
the superior performance of our approach as compared to related methods are
presented.
| true | true |
Kelkar, Varun A and Anastasio, Mark
| 2,021 |
18--24 Jul
|
https://proceedings.mlr.press/v139/kelkar21a.html
| null | null |
Prior Image-Constrained Reconstruction using Style-Based Generative
Models
|
Prior Image-Constrained Reconstruction using Style-Based ...
|
http://proceedings.mlr.press/v139/kelkar21a/kelkar21a.pdf
|
by VA Kelkar · 2021 · Cited by 33 — Style-based generative models have been known to be able to control individual semantic features, or styles, in an image by varying the disentangled.
|
Pre-Training Curriculum for Multi-Token Prediction in Language Models
|
2505.22757v1
|
bengio2009curriculum
|
\cite{bengio2009curriculum}
|
Curriculum learning
| null | null | true | false |
Bengio, Yoshua and Louradour, J\'{e}r\^{o}me and Collobert, Ronan and Weston, Jason
| 2,009 | null |
https://doi.org/10.1145/1553374.1553380
|
10.1145/1553374.1553380
| null |
Curriculum learning
|
Curriculum learning
|
https://en.wikipedia.org/wiki/Curriculum_learning
|
Curriculum learning is a technique in machine learning in which a model is trained on examples of increasing difficulty.
|
Pre-Training Curriculum for Multi-Token Prediction in Language Models
|
2505.22757v1
|
cl_survey
|
\cite{cl_survey}
|
Curriculum Learning: A Survey
|
http://arxiv.org/abs/2101.10382v3
|
Training machine learning models in a meaningful order, from the easy samples
to the hard ones, using curriculum learning can provide performance
improvements over the standard training approach based on random data
shuffling, without any additional computational costs. Curriculum learning
strategies have been successfully employed in all areas of machine learning, in
a wide range of tasks. However, the necessity of finding a way to rank the
samples from easy to hard, as well as the right pacing function for introducing
more difficult data can limit the usage of the curriculum approaches. In this
survey, we show how these limits have been tackled in the literature, and we
present different curriculum learning instantiations for various tasks in
machine learning. We construct a multi-perspective taxonomy of curriculum
learning approaches by hand, considering various classification criteria. We
further build a hierarchical tree of curriculum learning methods using an
agglomerative clustering algorithm, linking the discovered clusters with our
taxonomy. At the end, we provide some interesting directions for future work.
| true | true |
Petru Soviany and Radu Tudor Ionescu and Paolo Rota and Nicu Sebe
| 2,022 | null |
https://arxiv.org/abs/2101.10382
| null | null |
Curriculum Learning: A Survey
|
Curriculum Learning: A Survey
|
http://arxiv.org/pdf/2101.10382v3
|
Training machine learning models in a meaningful order, from the easy samples
to the hard ones, using curriculum learning can provide performance
improvements over the standard training approach based on random data
shuffling, without any additional computational costs. Curriculum learning
strategies have been successfully employed in all areas of machine learning, in
a wide range of tasks. However, the necessity of finding a way to rank the
samples from easy to hard, as well as the right pacing function for introducing
more difficult data can limit the usage of the curriculum approaches. In this
survey, we show how these limits have been tackled in the literature, and we
present different curriculum learning instantiations for various tasks in
machine learning. We construct a multi-perspective taxonomy of curriculum
learning approaches by hand, considering various classification criteria. We
further build a hierarchical tree of curriculum learning methods using an
agglomerative clustering algorithm, linking the discovered clusters with our
taxonomy. At the end, we provide some interesting directions for future work.
|
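As an illustration of the easy-to-hard ordering and pacing function discussed in the curriculum learning survey above, here is a minimal Python sketch. The length-based difficulty proxy, the linear pacing schedule, and all function names are assumptions made for this example, not a specific method from the survey.

```python
# Minimal curriculum-learning sketch: sort samples by a difficulty proxy,
# then let a pacing function decide how much of the sorted data is usable
# at each training step.

def difficulty(sample: str) -> int:
    # Proxy for "easiness": shorter sentences are treated as easier.
    return len(sample.split())

def pacing(step: int, total_steps: int, n_samples: int, start_frac: float = 0.2) -> int:
    # Linearly grow the usable fraction of the sorted data from start_frac to 1.0.
    frac = start_frac + (1.0 - start_frac) * min(step / total_steps, 1.0)
    return max(1, int(frac * n_samples))

def curriculum_batches(samples, total_steps, batch_size=4):
    ordered = sorted(samples, key=difficulty)          # easy -> hard
    for step in range(total_steps):
        cutoff = pacing(step, total_steps, len(ordered))
        pool = ordered[:cutoff]                        # only "unlocked" samples
        yield [pool[(step * batch_size + i) % cutoff] for i in range(batch_size)]

if __name__ == "__main__":
    data = ["a cat", "the dog runs fast",
            "curriculum learning orders training data by difficulty"]
    for batch in curriculum_batches(data, total_steps=3):
        print(batch)
```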
Pre-Training Curriculum for Multi-Token Prediction in Language Models
|
2505.22757v1
|
cl_nlu
|
\cite{cl_nlu}
|
Curriculum Learning for Natural Language Understanding
| null | null | true | false |
Xu, Benfeng and
Zhang, Licheng and
Mao, Zhendong and
Wang, Quan and
Xie, Hongtao and
Zhang, Yongdong
| 2,020 | null |
https://aclanthology.org/2020.acl-main.542
|
10.18653/v1/2020.acl-main.542
| null |
Curriculum Learning for Natural Language Understanding
|
[PDF] Curriculum Learning for Natural Language Understanding - Digie
|
https://api.digie.ai/publications/Curriculum-Learning-for-NLU.pdf
|
Natural Language Understanding (NLU), which re- quires machines to understand and reason with hu- man language, is a crucial yet challenging problem. Recently,
|
Pre-Training Curriculum for Multi-Token Prediction in Language Models
|
2505.22757v1
|
cl_bert
|
\cite{cl_bert}
|
Pre-training a {BERT} with Curriculum Learning by Increasing Block-Size of Input Text
| null | null | true | false |
Nagatsuka, Koichi and
Broni-Bediako, Clifford and
Atsumi, Masayasu
| 2,021 | null |
https://aclanthology.org/2021.ranlp-1.112
| null | null |
Pre-training a {BERT} with Curriculum Learning by Increasing Block-Size of Input Text
|
Pre-training a BERT with Curriculum Learning by Increasing Block ...
|
https://aclanthology.org/2021.ranlp-1.112/
|
We propose a new CL method which gradually increases the block-size of input text for training the self-attention mechanism of BERT and its variants.
|
Pre-Training Curriculum for Multi-Token Prediction in Language Models
|
2505.22757v1
|
bert_lrc
|
\cite{bert_lrc}
|
Modeling Easiness for Training Transformers with Curriculum Learning
| null | null | true | false |
Ranaldi, Leonardo and
Pucci, Giulia and
Zanzotto, Fabio Massimo
| 2,023 | null |
https://aclanthology.org/2023.ranlp-1.101
| null | null |
Modeling Easiness for Training Transformers with Curriculum Learning
|
Modeling Easiness for Training Transformers with Curriculum ...
|
https://aclanthology.org/2023.ranlp-1.101/
|
In this paper, building on Curriculum Learning, we propose a novel, linguistically motivated measure to determine example complexity for organizing examples
|
Pre-Training Curriculum for Multi-Token Prediction in Language Models
|
2505.22757v1
|
orca
|
\cite{orca}
|
Orca: Progressive Learning from Complex Explanation Traces of GPT-4
|
http://arxiv.org/abs/2306.02707v1
|
Recent research has focused on enhancing the capability of smaller models
through imitation learning, drawing on the outputs generated by large
foundation models (LFMs). A number of issues impact the quality of these
models, ranging from limited imitation signals from shallow LFM outputs; small
scale homogeneous training data; and most notably a lack of rigorous evaluation
resulting in overestimating the small model's capability as they tend to learn
to imitate the style, but not the reasoning process of LFMs. To address these
challenges, we develop Orca (We are working with our legal team to publicly
release a diff of the model weights in accordance with LLaMA's release policy
to be published at https://aka.ms/orca-lm), a 13-billion parameter model that
learns to imitate the reasoning process of LFMs. Orca learns from rich signals
from GPT-4 including explanation traces; step-by-step thought processes; and
other complex instructions, guided by teacher assistance from ChatGPT. To
promote this progressive learning, we tap into large-scale and diverse
imitation data with judicious sampling and selection. Orca surpasses
conventional state-of-the-art instruction-tuned models such as Vicuna-13B by
more than 100% in complex zero-shot reasoning benchmarks like Big-Bench Hard
(BBH) and 42% on AGIEval. Moreover, Orca reaches parity with ChatGPT on the BBH
benchmark and shows competitive performance (4 pts gap with optimized system
message) in professional and academic examinations like the SAT, LSAT, GRE, and
GMAT, both in zero-shot settings without CoT; while trailing behind GPT-4. Our
research indicates that learning from step-by-step explanations, whether these
are generated by humans or more advanced AI models, is a promising direction to
improve model capabilities and skills.
| true | true |
Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah
| 2,023 | null |
https://arxiv.org/abs/2306.02707
| null | null |
Orca: Progressive Learning from Complex Explanation Traces of GPT-4
|
Orca: Progressive Learning from Complex Explanation Traces of GPT-4
|
http://arxiv.org/pdf/2306.02707v1
|
Recent research has focused on enhancing the capability of smaller models
through imitation learning, drawing on the outputs generated by large
foundation models (LFMs). A number of issues impact the quality of these
models, ranging from limited imitation signals from shallow LFM outputs; small
scale homogeneous training data; and most notably a lack of rigorous evaluation
resulting in overestimating the small model's capability as they tend to learn
to imitate the style, but not the reasoning process of LFMs. To address these
challenges, we develop Orca (We are working with our legal team to publicly
release a diff of the model weights in accordance with LLaMA's release policy
to be published at https://aka.ms/orca-lm), a 13-billion parameter model that
learns to imitate the reasoning process of LFMs. Orca learns from rich signals
from GPT-4 including explanation traces; step-by-step thought processes; and
other complex instructions, guided by teacher assistance from ChatGPT. To
promote this progressive learning, we tap into large-scale and diverse
imitation data with judicious sampling and selection. Orca surpasses
conventional state-of-the-art instruction-tuned models such as Vicuna-13B by
more than 100% in complex zero-shot reasoning benchmarks like Big-Bench Hard
(BBH) and 42% on AGIEval. Moreover, Orca reaches parity with ChatGPT on the BBH
benchmark and shows competitive performance (4 pts gap with optimized system
message) in professional and academic examinations like the SAT, LSAT, GRE, and
GMAT, both in zero-shot settings without CoT; while trailing behind GPT-4. Our
research indicates that learning from step-by-step explanations, whether these
are generated by humans or more advanced AI models, is a promising direction to
improve model capabilities and skills.
|
Pre-Training Curriculum for Multi-Token Prediction in Language Models
|
2505.22757v1
|
curr_instr
|
\cite{curr_instr}
|
Instruction Tuning with Human Curriculum
|
http://arxiv.org/abs/2310.09518v4
|
In this work, we (1) introduce Curriculum Instruction Tuning, (2) explore the
potential advantages of employing diverse curriculum strategies, and (3)
delineate a synthetic instruction-response generation framework that
complements our theoretical approach. Distinct from the existing instruction
tuning dataset, our generation pipeline is systematically structured to emulate
the sequential and orderly characteristic of human learning. Additionally, we
describe a methodology for generating instruction-response datasets that
extensively span the various stages of human education, from middle school
through the graduate level, utilizing educational subject catalogs.
Before training, we meticulously organize the instruction data to ensure that
questions escalate in difficulty regarding (A) the subject matter and (B) the
intricacy of the instructions. The findings of our study reveal that
substantial improvements in performance can be achieved through the mere
application of curriculum ordering to instruction data (achieving gains of
+4.76 on TruthfulQA, +2.98 on MMLU, +2.8 on OpenbookQA, and +1.28 on ARC-hard)
compared to random shuffling. This enhancement is achieved without incurring
additional computational expenses. Through comprehensive experimentation, we
observe that the advantages of our proposed method are consistently evident
across nine benchmarks.
| true | true |
Lee, Bruce W and
Cho, Hyunsoo and
Yoo, Kang Min
| 2,024 | null |
https://aclanthology.org/2024.findings-naacl.82
|
10.18653/v1/2024.findings-naacl.82
| null |
Instruction Tuning with Human Curriculum
|
Instruction Tuning with Human Curriculum
|
http://arxiv.org/pdf/2310.09518v4
|
In this work, we (1) introduce Curriculum Instruction Tuning, (2) explore the
potential advantages of employing diverse curriculum strategies, and (3)
delineate a synthetic instruction-response generation framework that
complements our theoretical approach. Distinct from the existing instruction
tuning dataset, our generation pipeline is systematically structured to emulate
the sequential and orderly characteristic of human learning. Additionally, we
describe a methodology for generating instruction-response datasets that
extensively span the various stages of human education, from middle school
through the graduate level, utilizing educational subject catalogs.
Before training, we meticulously organize the instruction data to ensure that
questions escalate in difficulty regarding (A) the subject matter and (B) the
intricacy of the instructions. The findings of our study reveal that
substantial improvements in performance can be achieved through the mere
application of curriculum ordering to instruction data (achieving gains of
+4.76 on TruthfulQA, +2.98 on MMLU, +2.8 on OpenbookQA, and +1.28 on ARC-hard)
compared to random shuffling. This enhancement is achieved without incurring
additional computational expenses. Through comprehensive experimentation, we
observe that the advantages of our proposed method are consistently evident
across nine benchmarks.
|
Pre-Training Curriculum for Multi-Token Prediction in Language Models
|
2505.22757v1
|
feng2024
|
\cite{feng2024}
|
Maximize Your Data's Potential: Enhancing LLM Accuracy with Two-Phase
Pretraining
|
http://arxiv.org/abs/2412.15285v1
|
Pretraining large language models effectively requires strategic data
selection, blending and ordering. However, key details about data mixtures
especially their scalability to longer token horizons and larger model sizes
remain underexplored due to limited disclosure by model developers. To address
this, we formalize the concept of two-phase pretraining and conduct an
extensive systematic study on how to select and mix data to maximize model
accuracies for the two phases. Our findings illustrate that a two-phase
approach for pretraining outperforms random data ordering and natural
distribution of tokens by 3.4% and 17% on average accuracies. We provide
in-depth guidance on crafting optimal blends based on quality of the data
source and the number of epochs to be seen. We propose to design blends using
downsampled data at a smaller scale of 1T tokens and then demonstrate effective
scaling of our approach to larger token horizon of 15T tokens and larger model
size of 25B model size. These insights provide a series of steps practitioners
can follow to design and scale their data blends.
| true | true |
Steven Feng and Shrimai Prabhumoye and Kezhi Kong and Dan Su and Mostofa Patwary and Mohammad Shoeybi and Bryan Catanzaro
| 2,024 | null |
https://arxiv.org/abs/2412.15285
| null | null |
Maximize Your Data's Potential: Enhancing LLM Accuracy with Two-Phase
Pretraining
|
Maximize Your Data's Potential: Enhancing LLM Accuracy with Two ...
|
https://arxiv.org/abs/2412.15285
|
A two-phase approach for pretraining outperforms random data ordering and natural distribution of tokens by 3.4% and 17% on average accuracies.
|
Pre-Training Curriculum for Multi-Token Prediction in Language Models
|
2505.22757v1
|
babylm_2023
|
\cite{babylm_2023}
|
Findings of the {B}aby{LM} Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
| null | null | true | false |
Warstadt, Alex and
Mueller, Aaron and
Choshen, Leshem and
Wilcox, Ethan and
Zhuang, Chengxu and
Ciro, Juan and
Mosquera, Rafael and
Paranjabe, Bhargavi and
Williams, Adina and
Linzen, Tal and
Cotterell, Ryan
| 2,023 | null |
https://aclanthology.org/2023.conll-babylm.1
|
10.18653/v1/2023.conll-babylm.1
| null |
Findings of the {B}aby{LM} Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
|
Findings of the BabyLM Challenge: Sample-Efficient Pretraining on ...
|
https://aclanthology.org/2023.conll-babylm.1/
|
The BabyLM Challenge findings focus on sample-efficient pretraining on developmentally plausible corpora, presented at the 27th Conference on Computational
|
Pre-Training Curriculum for Multi-Token Prediction in Language Models
|
2505.22757v1
|
babylm_2024
|
\cite{babylm_2024}
|
Findings of the Second BabyLM Challenge: Sample-Efficient Pretraining on
Developmentally Plausible Corpora
|
http://arxiv.org/abs/2412.05149v1
|
The BabyLM Challenge is a community effort to close the data-efficiency gap
between human and computational language learners. Participants compete to
optimize language model training on a fixed language data budget of 100 million
words or less. This year, we released improved text corpora, as well as a
vision-and-language corpus to facilitate research into cognitively plausible
vision language models. Submissions were compared on evaluation tasks targeting
grammatical ability, (visual) question answering, pragmatic abilities, and
grounding, among other abilities. Participants could submit to a 10M-word
text-only track, a 100M-word text-only track, and/or a 100M-word and image
multimodal track. From 31 submissions employing diverse methods, a hybrid
causal-masked language model architecture outperformed other approaches. No
submissions outperformed the baselines in the multimodal track. In follow-up
analyses, we found a strong relationship between training FLOPs and average
performance across tasks, and that the best-performing submissions proposed
changes to the training data, training objective, and model architecture. This
year's BabyLM Challenge shows that there is still significant room for
innovation in this setting, in particular for image-text modeling, but
community-driven research can yield actionable insights about effective
strategies for small-scale language modeling.
| true | true |
Michael Y. Hu and Aaron Mueller and Candace Ross and Adina Williams and Tal Linzen and Chengxu Zhuang and Ryan Cotterell and Leshem Choshen and Alex Warstadt and Ethan Gotlieb Wilcox
| 2,024 | null |
https://arxiv.org/abs/2412.05149
| null | null |
Findings of the Second BabyLM Challenge: Sample-Efficient Pretraining on
Developmentally Plausible Corpora
|
[2504.08165] Findings of the BabyLM Challenge
|
https://arxiv.org/abs/2504.08165
|
View a PDF of the paper titled Findings of the BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora, by Alex Warstadt and 10 other authors. From over 30 submissions, we extract concrete recommendations on how best to train data-efficient language models, and on where future efforts should (and perhaps should not) focus.
|
Pre-Training Curriculum for Multi-Token Prediction in Language Models
|
2505.22757v1
|
less_is_more
|
\cite{less_is_more}
|
Less is More: Pre-Training Cross-Lingual Small-Scale Language Models
with Cognitively-Plausible Curriculum Learning Strategies
|
http://arxiv.org/abs/2410.22886v2
|
Curriculum Learning has been a popular strategy to improve the cognitive
plausibility of Small-Scale Language Models (SSLMs) in the BabyLM Challenge.
However, it has not led to considerable improvements over non-curriculum
models. We assess whether theoretical linguistic acquisition theories can be
used to specify more fine-grained curriculum learning strategies, creating
age-ordered corpora of Child-Directed Speech for four typologically distant
language families to implement SSLMs and acquisition-inspired curricula
cross-lingually. Comparing the success of three objective curricula (Growing,
Inwards and MMM) that precisely replicate the predictions of acquisition
theories on a standard SSLM architecture, we find fine-grained
acquisition-inspired curricula can outperform non-curriculum baselines and
performance benefits of curricula strategies in SSLMs can be derived by
specifying fine-grained language-specific curricula that precisely replicate
language acquisition theories.
| true | true |
Suchir Salhan and Richard Diehl Martinez and Zébulon Goriely and Paula Buttery
| 2,024 | null |
https://arxiv.org/abs/2410.22886
| null | null |
Less is More: Pre-Training Cross-Lingual Small-Scale Language Models
with Cognitively-Plausible Curriculum Learning Strategies
|
Suchir Salhan - Google Scholar
|
https://scholar.google.com/citations?user=xOo9sisAAAAJ&hl=en
|
Less is More: Pre-Training Cross-Lingual Small-Scale Language Models with Cognitively-Plausible Curriculum Learning Strategies. S Salhan, RD Martinez, Z Goriely
|
Pre-Training Curriculum for Multi-Token Prediction in Language Models
|
2505.22757v1
|
prophetnet
|
\cite{prophetnet}
|
{P}rophet{N}et: Predicting Future N-gram for Sequence-to-{S}equence{P}re-training
| null | null | true | false |
Qi, Weizhen and
Yan, Yu and
Gong, Yeyun and
Liu, Dayiheng and
Duan, Nan and
Chen, Jiusheng and
Zhang, Ruofei and
Zhou, Ming
| 2,020 | null |
https://aclanthology.org/2020.findings-emnlp.217
|
10.18653/v1/2020.findings-emnlp.217
| null |
{P}rophet{N}et: Predicting Future N-gram for Sequence-to-{S}equence{P}re-training
|
ProphetNet: Predicting Future N-gram for Sequence-to- ...
|
https://arxiv.org/abs/2001.04063
|
by W Qi · 2020 · Cited by 542 — This paper presents a new sequence-to-sequence pre-training model called ProphetNet, which introduces a novel self-supervised objective named future n-gram
|
Pre-Training Curriculum for Multi-Token Prediction in Language Models
|
2505.22757v1
|
future_lens
|
\cite{future_lens}
|
Future Lens: Anticipating Subsequent Tokens from a Single Hidden State
|
http://arxiv.org/abs/2311.04897v1
|
We conjecture that hidden state vectors corresponding to individual input
tokens encode information sufficient to accurately predict several tokens
ahead. More concretely, in this paper we ask: Given a hidden (internal)
representation of a single token at position $t$ in an input, can we reliably
anticipate the tokens that will appear at positions $\geq t + 2$? To test this,
we measure linear approximation and causal intervention methods in GPT-J-6B to
evaluate the degree to which individual hidden states in the network contain
signal rich enough to predict future hidden states and, ultimately, token
outputs. We find that, at some layers, we can approximate a model's output with
more than 48% accuracy with respect to its prediction of subsequent tokens
through a single hidden state. Finally we present a "Future Lens" visualization
that uses these methods to create a new view of transformer states.
| true | true |
Pal, Koyena and
Sun, Jiuding and
Yuan, Andrew and
Wallace, Byron and
Bau, David
| 2,023 | null |
https://aclanthology.org/2023.conll-1.37
|
10.18653/v1/2023.conll-1.37
| null |
Future Lens: Anticipating Subsequent Tokens from a Single Hidden State
|
Future Lens: Anticipating Subsequent Tokens from a Single Hidden State
|
http://arxiv.org/pdf/2311.04897v1
|
We conjecture that hidden state vectors corresponding to individual input
tokens encode information sufficient to accurately predict several tokens
ahead. More concretely, in this paper we ask: Given a hidden (internal)
representation of a single token at position $t$ in an input, can we reliably
anticipate the tokens that will appear at positions $\geq t + 2$? To test this,
we measure linear approximation and causal intervention methods in GPT-J-6B to
evaluate the degree to which individual hidden states in the network contain
signal rich enough to predict future hidden states and, ultimately, token
outputs. We find that, at some layers, we can approximate a model's output with
more than 48% accuracy with respect to its prediction of subsequent tokens
through a single hidden state. Finally we present a "Future Lens" visualization
that uses these methods to create a new view of transformer states.
|
Pre-Training Curriculum for Multi-Token Prediction in Language Models
|
2505.22757v1
|
gloeckle2024mtp
|
\cite{gloeckle2024mtp}
|
Better & Faster Large Language Models via Multi-token Prediction
|
http://arxiv.org/abs/2404.19737v1
|
Large language models such as GPT and Llama are trained with a next-token
prediction loss. In this work, we suggest that training language models to
predict multiple future tokens at once results in higher sample efficiency.
More specifically, at each position in the training corpus, we ask the model to
predict the following n tokens using n independent output heads, operating on
top of a shared model trunk. Considering multi-token prediction as an auxiliary
training task, we measure improved downstream capabilities with no overhead in
training time for both code and natural language models. The method is
increasingly useful for larger model sizes, and keeps its appeal when training
for multiple epochs. Gains are especially pronounced on generative benchmarks
like coding, where our models consistently outperform strong baselines by
several percentage points. Our 13B parameter models solves 12 % more problems
on HumanEval and 17 % more on MBPP than comparable next-token models.
Experiments on small algorithmic tasks demonstrate that multi-token prediction
is favorable for the development of induction heads and algorithmic reasoning
capabilities. As an additional benefit, models trained with 4-token prediction
are up to 3 times faster at inference, even with large batch sizes.
| true | true |
Fabian Gloeckle and Badr Youbi Idrissi and Baptiste Rozière and David Lopez-Paz and Gabriel Synnaeve
| 2,024 | null |
https://arxiv.org/abs/2404.19737
| null | null |
Better & Faster Large Language Models via Multi-token Prediction
|
Better & Faster Large Language Models via Multi-token ...
|
https://www.reddit.com/r/LocalLLaMA/comments/1dj9xql/better_faster_large_language_models_via/
|
In this work, we suggest that training language models to predict multiple future tokens at once results in higher sample efficiency.
|
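Below is a minimal PyTorch sketch of the training objective described in the multi-token prediction abstract above: a shared trunk with n independent output heads, where head i at position t is trained on the token t + i + 1. The toy GRU trunk, the dimensions, and the class and function names are illustrative assumptions, not the paper's architecture.

```python
# Multi-token prediction sketch: one shared trunk, n output heads,
# cross-entropy against the i-th future token for head i.
import torch
import torch.nn as nn

class MultiTokenLM(nn.Module):
    def __init__(self, vocab_size=100, d_model=32, n_heads_future=4):
        super().__init__()
        self.n = n_heads_future
        self.embed = nn.Embedding(vocab_size, d_model)
        self.trunk = nn.GRU(d_model, d_model, batch_first=True)  # stand-in for a transformer trunk
        self.heads = nn.ModuleList(nn.Linear(d_model, vocab_size) for _ in range(self.n))

    def forward(self, tokens):                     # tokens: (batch, seq)
        h, _ = self.trunk(self.embed(tokens))      # shared representation
        return [head(h) for head in self.heads]    # one logit tensor per future offset

def mtp_loss(logits_per_head, tokens):
    # Head i at position t is trained to predict token t + i + 1.
    loss, ce = 0.0, nn.CrossEntropyLoss()
    for i, logits in enumerate(logits_per_head):
        pred = logits[:, : tokens.size(1) - (i + 1)]     # positions with a valid target
        tgt = tokens[:, i + 1 :]
        loss = loss + ce(pred.reshape(-1, pred.size(-1)), tgt.reshape(-1))
    return loss / len(logits_per_head)

tokens = torch.randint(0, 100, (2, 16))
model = MultiTokenLM()
print(mtp_loss(model(tokens), tokens).item())
```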
Pre-Training Curriculum for Multi-Token Prediction in Language Models
|
2505.22757v1
|
blockwise_parallel_decoding
|
\cite{blockwise_parallel_decoding}
|
Blockwise Parallel Decoding for Deep Autoregressive Models
|
http://arxiv.org/abs/1811.03115v1
|
Deep autoregressive sequence-to-sequence models have demonstrated impressive
performance across a wide variety of tasks in recent years. While common
architecture classes such as recurrent, convolutional, and self-attention
networks make different trade-offs between the amount of computation needed per
layer and the length of the critical path at training time, generation still
remains an inherently sequential process. To overcome this limitation, we
propose a novel blockwise parallel decoding scheme in which we make predictions
for multiple time steps in parallel then back off to the longest prefix
validated by a scoring model. This allows for substantial theoretical
improvements in generation speed when applied to architectures that can process
output sequences in parallel. We verify our approach empirically through a
series of experiments using state-of-the-art self-attention models for machine
translation and image super-resolution, achieving iteration reductions of up to
2x over a baseline greedy decoder with no loss in quality, or up to 7x in
exchange for a slight decrease in performance. In terms of wall-clock time, our
fastest models exhibit real-time speedups of up to 4x over standard greedy
decoding.
| true | true |
Stern, Mitchell and Shazeer, Noam and Uszkoreit, Jakob
| 2,018 | null |
https://proceedings.neurips.cc/paper_files/paper/2018/file/c4127b9194fe8562c64dc0f5bf2c93bc-Paper.pdf
| null | null |
Blockwise Parallel Decoding for Deep Autoregressive Models
|
Blockwise Parallel Decoding for Deep Autoregressive Models
|
http://arxiv.org/pdf/1811.03115v1
|
Deep autoregressive sequence-to-sequence models have demonstrated impressive
performance across a wide variety of tasks in recent years. While common
architecture classes such as recurrent, convolutional, and self-attention
networks make different trade-offs between the amount of computation needed per
layer and the length of the critical path at training time, generation still
remains an inherently sequential process. To overcome this limitation, we
propose a novel blockwise parallel decoding scheme in which we make predictions
for multiple time steps in parallel then back off to the longest prefix
validated by a scoring model. This allows for substantial theoretical
improvements in generation speed when applied to architectures that can process
output sequences in parallel. We verify our approach empirically through a
series of experiments using state-of-the-art self-attention models for machine
translation and image super-resolution, achieving iteration reductions of up to
2x over a baseline greedy decoder with no loss in quality, or up to 7x in
exchange for a slight decrease in performance. In terms of wall-clock time, our
fastest models exhibit real-time speedups of up to 4x over standard greedy
decoding.
|
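The propose-then-verify loop in the blockwise parallel decoding abstract above can be illustrated with a toy Python sketch: draft k tokens at once, then keep the longest prefix that the base (greedy) model itself would have produced. The `base_next_token` and `draft` functions are stand-ins for real models and are assumptions of this example.

```python
# Toy predict-then-verify loop: propose k tokens, accept the longest prefix
# the base model agrees with, and always make at least one token of progress.

def base_next_token(prefix):
    # Deterministic toy "model": next token is (last + 1) mod 10.
    return (prefix[-1] + 1) % 10 if prefix else 0

def draft(prefix, k):
    # Toy proposer: usually right, but wrong on every 4th position.
    out, p = [], list(prefix)
    for _ in range(k):
        t = base_next_token(p)
        if (len(p) + 1) % 4 == 0:
            t = (t + 5) % 10        # inject a disagreement
        out.append(t)
        p.append(t)
    return out

def blockwise_decode(prefix, steps, k=4):
    prefix = list(prefix)
    for _ in range(steps):
        proposal = draft(prefix, k)
        accepted = []
        for t in proposal:          # verify in order, keep longest agreeing prefix
            if t == base_next_token(prefix + accepted):
                accepted.append(t)
            else:
                break
        accepted = accepted or [base_next_token(prefix)]
        prefix += accepted
    return prefix

print(blockwise_decode([0], steps=5))
```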
Pre-Training Curriculum for Multi-Token Prediction in Language Models
|
2505.22757v1
|
layerskip
|
\cite{layerskip}
|
{L}ayer{S}kip: Enabling Early Exit Inference and Self-Speculative Decoding
| null | null | true | false |
Elhoushi, Mostafa and
Shrivastava, Akshat and
Liskovich, Diana and
Hosmer, Basil and
Wasti, Bram and
Lai, Liangzhen and
Mahmoud, Anas and
Acun, Bilge and
Agarwal, Saurabh and
Roman, Ahmed and
Aly, Ahmed and
Chen, Beidi and
Wu, Carole-Jean
| 2,024 | null |
https://aclanthology.org/2024.acl-long.681
|
10.18653/v1/2024.acl-long.681
| null |
{L}ayer{S}kip: Enabling Early Exit Inference and Self-Speculative Decoding
|
Enabling Early Exit Inference and Self-Speculative Decoding
|
https://aclanthology.org/2024.acl-long.681/
|
We present LayerSkip, an end-to-end solution to speed-up inference of large language models (LLMs). First, during training we apply layer dropout.
|
Pre-Training Curriculum for Multi-Token Prediction in Language Models
|
2505.22757v1
|
kangaroo
|
\cite{kangaroo}
|
Kangaroo: Lossless Self-Speculative Decoding via Double Early Exiting
|
http://arxiv.org/abs/2404.18911v1
|
Speculative decoding has demonstrated its effectiveness in accelerating the
inference of large language models while maintaining a consistent sampling
distribution. However, the conventional approach of training a separate draft
model to achieve a satisfactory token acceptance rate can be costly. Drawing
inspiration from early exiting, we propose a novel self-speculative decoding
framework \emph{Kangaroo}, which uses a fixed shallow sub-network as a
self-draft model, with the remaining layers serving as the larger target model.
We train a lightweight and efficient adapter module on top of the sub-network
to bridge the gap between the sub-network and the full model's representation
ability. It is noteworthy that the inference latency of the self-draft model
may no longer be negligible compared to the large model, necessitating
strategies to increase the token acceptance rate while minimizing the drafting
steps of the small model. To address this challenge, we introduce an additional
early exiting mechanism for generating draft tokens. Specifically, we halt the
small model's subsequent prediction during the drafting phase once the
confidence level for the current token falls below a certain threshold.
Extensive experiments on the Spec-Bench demonstrate the effectiveness of
Kangaroo. Under single-sequence verification, Kangaroo achieves speedups up to
$1.68\times$ on Spec-Bench, outperforming Medusa-1 with 88.7\% fewer additional
parameters (67M compared to 591M). The code for Kangaroo is available at
https://github.com/Equationliu/Kangaroo.
| true | true |
Fangcheng Liu and Yehui Tang and Zhenhua Liu and Yunsheng Ni and Kai Han and Yunhe Wang
| 2,024 | null |
https://arxiv.org/abs/2404.18911
| null | null |
Kangaroo: Lossless Self-Speculative Decoding via Double Early Exiting
|
NeurIPS Poster Kangaroo: Lossless Self-Speculative Decoding for ...
|
https://neurips.cc/virtual/2024/poster/93829
|
Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early Exiting. However, the conventional approach of training a separate draft model to achieve a satisfactory token acceptance rate can be costly and impractical. In this paper, we propose a novel self-speculative decoding framework \emph{Kangaroo} with \emph{double} early exiting strategy, which leverages the shallow sub-network and the \texttt{LM Head} of the well-trained target LLM to construct a self-drafting model. One significant challenge that comes with the proposed method is that the inference latency of the self-draft model may no longer be negligible compared to the big model. To boost the token acceptance rate while minimizing the latency of the self-drafting model, we introduce an additional \emph{early exiting} mechanism for both single-sequence and the tree decoding scenarios.
|
Pre-Training Curriculum for Multi-Token Prediction in Language Models
|
2505.22757v1
|
draft_verify
|
\cite{draft_verify}
|
Draft & Verify: Lossless Large Language Model Acceleration via
Self-Speculative Decoding
|
http://arxiv.org/abs/2309.08168v2
|
We present a novel inference scheme, self-speculative decoding, for
accelerating Large Language Models (LLMs) without the need for an auxiliary
model. This approach is characterized by a two-stage process: drafting and
verification. The drafting stage generates draft tokens at a slightly lower
quality but more quickly, which is achieved by selectively skipping certain
intermediate layers during drafting. Subsequently, the verification stage
employs the original LLM to validate those draft output tokens in one forward
pass. This process ensures the final output remains identical to that produced
by the unaltered LLM. Moreover, the proposed method requires no additional
neural network training and no extra memory footprint, making it a
plug-and-play and cost-effective solution for inference acceleration.
Benchmarks with LLaMA-2 and its variants demonstrated a speedup up to
1.99$\times$.
| true | true |
Zhang, Jun and
Wang, Jue and
Li, Huan and
Shou, Lidan and
Chen, Ke and
Chen, Gang and
Mehrotra, Sharad
| 2,024 | null |
https://aclanthology.org/2024.acl-long.607
|
10.18653/v1/2024.acl-long.607
| null |
Draft & Verify: Lossless Large Language Model Acceleration via
Self-Speculative Decoding
|
Draft & Verify: Lossless Large Language Model ...
|
https://aclanthology.org/2024.acl-long.607/
|
by J Zhang · 2024 · Cited by 130 — We present a novel inference scheme, self-speculative decoding, for accelerating Large Language Models (LLMs) without the need for an auxiliary model.
|
Pre-Training Curriculum for Multi-Token Prediction in Language Models
|
2505.22757v1
|
swift
|
\cite{swift}
|
SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference
Acceleration
|
http://arxiv.org/abs/2410.06916v2
|
Speculative decoding (SD) has emerged as a widely used paradigm to accelerate
LLM inference without compromising quality. It works by first employing a
compact model to draft multiple tokens efficiently and then using the target
LLM to verify them in parallel. While this technique has achieved notable
speedups, most existing approaches necessitate either additional parameters or
extensive training to construct effective draft models, thereby restricting
their applicability across different LLMs and tasks. To address this
limitation, we explore a novel plug-and-play SD solution with layer-skipping,
which skips intermediate layers of the target LLM as the compact draft model.
Our analysis reveals that LLMs exhibit great potential for self-acceleration
through layer sparsity and the task-specific nature of this sparsity. Building
on these insights, we introduce SWIFT, an on-the-fly self-speculative decoding
algorithm that adaptively selects intermediate layers of LLMs to skip during
inference. SWIFT does not require auxiliary models or additional training,
making it a plug-and-play solution for accelerating LLM inference across
diverse input data streams. Our extensive experiments across a wide range of
models and downstream tasks demonstrate that SWIFT can achieve over a 1.3x-1.6x
speedup while preserving the original distribution of the generated text. We
release our code in https://github.com/hemingkx/SWIFT.
| true | true |
Heming Xia and Yongqi Li and Jun Zhang and Cunxiao Du and Wenjie Li
| 2,024 | null |
https://arxiv.org/abs/2410.06916
| null | null |
SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference
Acceleration
|
SWIFT: On-the-Fly Self-Speculative Decoding for LLM ...
|
https://github.com/hemingkx/SWIFT
|
SWIFT is an on-the-fly self-speculative decoding algorithm that adaptively selects intermediate layers of LLMs to skip during inference.
|
Pre-Training Curriculum for Multi-Token Prediction in Language Models
|
2505.22757v1
|
koala
|
\cite{koala}
|
KOALA: Enhancing Speculative Decoding for LLM via Multi-Layer Draft
Heads with Adversarial Learning
|
http://arxiv.org/abs/2408.08146v1
|
Large Language Models (LLMs) exhibit high inference latency due to their
autoregressive decoding nature. While the draft head in speculative decoding
mitigates this issue, its full potential remains unexplored. In this paper, we
introduce KOALA (K-layer Optimized Adversarial Learning Architecture), an
orthogonal approach to the draft head. By transforming the conventional
single-layer draft head into a multi-layer architecture and incorporating
adversarial learning into the traditional supervised training, KOALA
significantly improves the accuracy of the draft head in predicting subsequent
tokens, thus more closely mirroring the functionality of LLMs. Although this
improvement comes at the cost of slightly increased drafting overhead, KOALA
substantially unlocks the draft head's potential, greatly enhancing speculative
decoding. We conducted comprehensive evaluations of KOALA, including both
autoregressive and non-autoregressive draft heads across various tasks,
demonstrating a latency speedup ratio improvement of 0.24x-0.41x, which is
10.57%-14.09% faster than the original draft heads.
| true | true |
Kaiqi Zhang and Jing Zhao and Rui Chen
| 2,024 | null |
https://arxiv.org/abs/2408.08146
| null | null |
KOALA: Enhancing Speculative Decoding for LLM via Multi-Layer Draft
Heads with Adversarial Learning
|
hemingkx/SpeculativeDecodingPapers: Must-read papers ... - GitHub
|
https://github.com/hemingkx/SpeculativeDecodingPapers
|
[pdf], 2024.08. KOALA: Enhancing Speculative Decoding for LLM via Multi-Layer Draft Heads with Adversarial Learning Kaiqi Zhang, Jing Zhao, Rui Chen. [pdf]
|
Pre-Training Curriculum for Multi-Token Prediction in Language Models
|
2505.22757v1
|
medusa
|
\cite{medusa}
|
Medusa: Simple LLM Inference Acceleration Framework with Multiple
Decoding Heads
|
http://arxiv.org/abs/2401.10774v3
|
Large Language Models (LLMs) employ auto-regressive decoding that requires
sequential computation, with each step reliant on the previous one's output.
This creates a bottleneck as each step necessitates moving the full model
parameters from High-Bandwidth Memory (HBM) to the accelerator's cache. While
methods such as speculative decoding have been suggested to address this issue,
their implementation is impeded by the challenges associated with acquiring and
maintaining a separate draft model. In this paper, we present Medusa, an
efficient method that augments LLM inference by adding extra decoding heads to
predict multiple subsequent tokens in parallel. Using a tree-based attention
mechanism, Medusa constructs multiple candidate continuations and verifies them
simultaneously in each decoding step. By leveraging parallel processing, Medusa
substantially reduces the number of decoding steps required. We present two
levels of fine-tuning procedures for Medusa to meet the needs of different use
cases: Medusa-1: Medusa is directly fine-tuned on top of a frozen backbone LLM,
enabling lossless inference acceleration. Medusa-2: Medusa is fine-tuned
together with the backbone LLM, enabling better prediction accuracy of Medusa
heads and higher speedup but needing a special training recipe that preserves
the backbone model's capabilities.
Moreover, we propose several extensions that improve or expand the utility of
Medusa, including a self-distillation to handle situations where no training
data is available and a typical acceptance scheme to boost the acceptance rate
while maintaining generation quality. We evaluate Medusa on models of various
sizes and training procedures. Our experiments demonstrate that Medusa-1 can
achieve over 2.2x speedup without compromising generation quality, while
Medusa-2 further improves the speedup to 2.3-3.6x.
| true | true |
Tianle Cai and Yuhong Li and Zhengyang Geng and Hongwu Peng and Jason D. Lee and Deming Chen and Tri Dao
| 2,024 | null |
https://arxiv.org/abs/2401.10774
| null | null |
Medusa: Simple LLM Inference Acceleration Framework with Multiple
Decoding Heads
|
Medusa: Simple Framework for Accelerating LLM ...
|
https://github.com/FasterDecoding/Medusa
|
Medusa is a simple framework that democratizes the acceleration techniques for LLM generation with multiple decoding heads.
|
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose
Estimation
|
2506.02853v1
|
lee1985determination
|
\cite{lee1985determination}
|
Determination of {3D} human body postures from a single view
| null | null | true | false |
Lee, Hsi-Jian and Chen, Zen
| 1,985 | null | null | null |
Computer Vision, Graphics, and Image Processing
|
Determination of {3D} human body postures from a single view
|
Determination of 3D human body postures from a single view
|
https://www.sciencedirect.com/science/article/abs/pii/0734189X85900945
|
In this paper a method is proposed to recover and interpret the 3D body structures of a person from a single view, provided that (1) at least six feature points on the head and a set of body joints are available on the image plane, and (2) the geometry of head and lengths of body segments formed by joints are known.
|
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose
Estimation
|
2506.02853v1
|
mehta2017monocular
|
\cite{mehta2017monocular}
|
Monocular {3D} human pose estimation in the wild using improved cnn supervision
| null | null | true | false |
Mehta, Dushyant and Rhodin, Helge and Casas, Dan and Fua, Pascal and Sotnychenko, Oleksandr and Xu, Weipeng and Theobalt, Christian
| 2,017 | null | null | null | null |
Monocular {3D} human pose estimation in the wild using improved cnn supervision
|
Monocular 3D Human Pose Estimation In The Wild Using Improved ...
|
https://arxiv.org/abs/1611.09813
|
Authors: Dushyant Mehta, Helge Rhodin, Dan Casas, Pascal Fua, Oleksandr Sotnychenko, Weipeng Xu, Christian Theobalt. View a PDF of the paper titled Monocular 3D Human Pose Estimation In The Wild Using Improved CNN Supervision, by Dushyant Mehta and 6 other authors.
|
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose
Estimation
|
2506.02853v1
|
pavlakos2017coarse
|
\cite{pavlakos2017coarse}
|
Coarse-to-Fine Volumetric Prediction for Single-Image 3D Human Pose
|
http://arxiv.org/abs/1611.07828v2
|
This paper addresses the challenge of 3D human pose estimation from a single
color image. Despite the general success of the end-to-end learning paradigm,
top performing approaches employ a two-step solution consisting of a
Convolutional Network (ConvNet) for 2D joint localization and a subsequent
optimization step to recover 3D pose. In this paper, we identify the
representation of 3D pose as a critical issue with current ConvNet approaches
and make two important contributions towards validating the value of end-to-end
learning for this task. First, we propose a fine discretization of the 3D space
around the subject and train a ConvNet to predict per voxel likelihoods for
each joint. This creates a natural representation for 3D pose and greatly
improves performance over the direct regression of joint coordinates. Second,
to further improve upon initial estimates, we employ a coarse-to-fine
prediction scheme. This step addresses the large dimensionality increase and
enables iterative refinement and repeated processing of the image features. The
proposed approach outperforms all state-of-the-art methods on standard
benchmarks achieving a relative error reduction greater than 30% on average.
Additionally, we investigate using our volumetric representation in a related
architecture which is suboptimal compared to our end-to-end approach, but is of
practical interest, since it enables training when no image with corresponding
3D groundtruth is available, and allows us to present compelling results for
in-the-wild images.
| true | true |
Pavlakos, Georgios and Zhou, Xiaowei and Derpanis, Konstantinos G and Daniilidis, Kostas
| 2,017 | null | null | null | null |
Coarse-to-Fine Volumetric Prediction for Single-Image 3D Human Pose
|
Coarse-to-Fine Volumetric Prediction for Single-Image 3D ...
|
https://arxiv.org/abs/1611.07828
|
arXiv:1611.07828 (cs). View a PDF of the paper titled Coarse-to-Fine Volumetric Prediction for Single-Image 3D Human Pose, by Georgios Pavlakos and 3 other authors.
|
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose
Estimation
|
2506.02853v1
|
cai2019exploiting
|
\cite{cai2019exploiting}
|
Exploiting spatial-temporal relationships for {3D} pose estimation via graph convolutional networks
| null | null | true | false |
Cai, Yujun and Ge, Liuhao and Liu, Jun and Cai, Jianfei and Cham, Tat-Jen and Yuan, Junsong and Thalmann, Nadia Magnenat
| 2,019 | null | null | null | null |
Exploiting spatial-temporal relationships for {3D} pose estimation via graph convolutional networks
|
vanoracai/Exploiting-Spatial-temporal-Relationships-for- ...
|
https://github.com/vanoracai/Exploiting-Spatial-temporal-Relationships-for-3D-Pose-Estimation-via-Graph-Convolutional-Networks
|
This is the code for the paper ICCV 2019 Exploiting Spatial-temporal Relationships for 3D Pose Estimation via Graph Convolutional Networks in Pytorch.
|
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose
Estimation
|
2506.02853v1
|
martinez2017simple
|
\cite{martinez2017simple}
|
A simple yet effective baseline for 3d human pose estimation
|
http://arxiv.org/abs/1705.03098v2
|
Following the success of deep convolutional networks, state-of-the-art
methods for 3d human pose estimation have focused on deep end-to-end systems
that predict 3d joint locations given raw image pixels. Despite their excellent
performance, it is often not easy to understand whether their remaining error
stems from a limited 2d pose (visual) understanding, or from a failure to map
2d poses into 3-dimensional positions. With the goal of understanding these
sources of error, we set out to build a system that given 2d joint locations
predicts 3d positions. Much to our surprise, we have found that, with current
technology, "lifting" ground truth 2d joint locations to 3d space is a task
that can be solved with a remarkably low error rate: a relatively simple deep
feed-forward network outperforms the best reported result by about 30\% on
Human3.6M, the largest publicly available 3d pose estimation benchmark.
Furthermore, training our system on the output of an off-the-shelf
state-of-the-art 2d detector (\ie, using images as input) yields state of the
art results -- this includes an array of systems that have been trained
end-to-end specifically for this task. Our results indicate that a large
portion of the error of modern deep 3d pose estimation systems stems from their
visual analysis, and suggests directions to further advance the state of the
art in 3d human pose estimation.
| true | true |
Martinez, Julieta and Hossain, Rayat and Romero, Javier and Little, James J
| 2,017 | null | null | null | null |
A simple yet effective baseline for 3d human pose estimation
|
A simple yet effective baseline for 3d human pose estimation
|
http://arxiv.org/pdf/1705.03098v2
|
Following the success of deep convolutional networks, state-of-the-art
methods for 3d human pose estimation have focused on deep end-to-end systems
that predict 3d joint locations given raw image pixels. Despite their excellent
performance, it is often not easy to understand whether their remaining error
stems from a limited 2d pose (visual) understanding, or from a failure to map
2d poses into 3-dimensional positions. With the goal of understanding these
sources of error, we set out to build a system that given 2d joint locations
predicts 3d positions. Much to our surprise, we have found that, with current
technology, "lifting" ground truth 2d joint locations to 3d space is a task
that can be solved with a remarkably low error rate: a relatively simple deep
feed-forward network outperforms the best reported result by about 30\% on
Human3.6M, the largest publicly available 3d pose estimation benchmark.
Furthermore, training our system on the output of an off-the-shelf
state-of-the-art 2d detector (\ie, using images as input) yields state of the
art results -- this includes an array of systems that have been trained
end-to-end specifically for this task. Our results indicate that a large
portion of the error of modern deep 3d pose estimation systems stems from their
visual analysis, and suggests directions to further advance the state of the
art in 3d human pose estimation.
|
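A minimal PyTorch sketch of the 2D-to-3D "lifting" idea in the abstract above: a plain feed-forward network mapping 2D joint coordinates to 3D joint positions. The 17-joint skeleton, the layer widths, and the omission of the paper's residual blocks and batch normalization are simplifying assumptions of this example.

```python
# Simple 2D -> 3D pose lifting network: flatten 2D joints, pass through an
# MLP, reshape the output into per-joint 3D coordinates.
import torch
import torch.nn as nn

class Lifter(nn.Module):
    def __init__(self, n_joints=17, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_joints * 2, hidden), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(hidden, n_joints * 3),
        )

    def forward(self, joints_2d):                    # (batch, n_joints, 2)
        flat = joints_2d.flatten(start_dim=1)        # (batch, n_joints * 2)
        return self.net(flat).view(-1, joints_2d.size(1), 3)

pose_2d = torch.randn(8, 17, 2)
print(Lifter()(pose_2d).shape)   # torch.Size([8, 17, 3])
```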
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose
Estimation
|
2506.02853v1
|
zhao2019semantic
|
\cite{zhao2019semantic}
|
{Semantic Graph Convolutional Networks for 3D Human Pose Regression}
| null | null | true | false |
Zhao, Long and Peng, Xi and Tian, Yu and Kapadia, Mubbasir and Metaxas, Dimitris N
| 2,019 | null | null | null | null |
{Semantic Graph Convolutional Networks for 3D Human Pose Regression}
|
Semantic Graph Convolutional Networks for 3D Human ...
|
https://openaccess.thecvf.com/content_CVPR_2019/papers/Zhao_Semantic_Graph_Convolutional_Networks_for_3D_Human_Pose_Regression_CVPR_2019_paper.pdf
|
by L Zhao · 2019 · Cited by 714 — SemGCN is a novel network for regression tasks with graph data, capturing semantic information, and applied to 3D human pose regression.
|
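As a concrete illustration of the graph-convolutional pose regressors cited above (e.g., SemGCN), here is a minimal PyTorch sketch of one graph convolution over a skeleton adjacency matrix. The 5-joint toy chain, the row-normalization scheme, and the single layer are assumptions of the example rather than the SemGCN architecture itself.

```python
# One skeleton graph convolution: joint features are linearly transformed,
# then mixed along a normalized adjacency matrix with self-loops.
import torch
import torch.nn as nn

class SkeletonGraphConv(nn.Module):
    def __init__(self, adjacency, in_dim, out_dim):
        super().__init__()
        a = adjacency + torch.eye(adjacency.size(0))        # add self-loops
        self.register_buffer("a_norm", a / a.sum(dim=1, keepdim=True))  # row-normalize
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x):                 # x: (batch, n_joints, in_dim)
        return torch.relu(self.a_norm @ self.lin(x))         # neighborhood aggregation

# Toy 5-joint chain: 0-1-2-3-4 (e.g., hip -> knee -> ankle ...).
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
adj = torch.zeros(5, 5)
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0

layer = SkeletonGraphConv(adj, in_dim=2, out_dim=16)
print(layer(torch.randn(4, 5, 2)).shape)   # torch.Size([4, 5, 16])
```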
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose
Estimation
|
2506.02853v1
|
zou2021modulated
|
\cite{zou2021modulated}
|
Modulated graph convolutional network for {3D} human pose estimation
| null | null | true | false |
Zou, Zhiming and Tang, Wei
| 2,021 | null | null | null | null |
Modulated graph convolutional network for {3D} human pose estimation
|
Modulated Graph Convolutional Network for 3D Human Pose ...
|
https://ieeexplore.ieee.org/document/9710217/
|
The graph convolutional network (GCN) has recently achieved promising performance of 3D human pose estimation (HPE) by modeling the relationship among body
|
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose
Estimation
|
2506.02853v1
|
zhao2022graformer
|
\cite{zhao2022graformer}
|
{GraFormer: Graph-oriented Transformer for {3D} Pose Estimation}
| null | null | true | false |
Zhao, Weixi and Wang, Weiqiang and Tian, Yunjie
| 2,022 | null | null | null | null |
{GraFormer: Graph-oriented Transformer for {3D} Pose Estimation}
|
[PDF] GraFormer: Graph-Oriented Transformer for 3D Pose Estimation
|
https://openaccess.thecvf.com/content/CVPR2022/papers/Zhao_GraFormer_Graph-Oriented_Transformer_for_3D_Pose_Estimation_CVPR_2022_paper.pdf
|
In this paper, we use a new transformer architecture by embedding graph convolution operations to improve the 3D pose estimation.
|
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose
Estimation
|
2506.02853v1
|
ZhongTMM2024
|
\cite{ZhongTMM2024}
|
{Frame-Padded Multiscale Transformer for Monocular {3D} Human Pose Estimation}
| null | null | true | false |
Zhong, Yuanhong and Yang, Guangxia and Zhong, Daidi and Yang, Xun and Wang, Shanshan
| 2,024 | null | null |
10.1109/TMM.2023.3347095
|
IEEE Transactions on Multimedia
|
{Frame-Padded Multiscale Transformer for Monocular {3D} Human Pose Estimation}
|
Frame-Padded Multiscale Transformer for Monocular 3D Human ...
|
https://dl.acm.org/doi/10.1109/TMM.2023.3347095
|
Monocular 3D human pose estimation is an ill-posed problem in computer vision due to its depth ambiguity. Most existing works supplement the depth
|
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose
Estimation
|
2506.02853v1
|
WangTMM2024
|
\cite{WangTMM2024}
|
{Exploiting Temporal Correlations for {3D} Human Pose Estimation}
| null | null | true | false |
Wang, Ruibin and Ying, Xianghua and Xing, Bowei
| 2,024 | null | null |
10.1109/TMM.2023.3323874
|
IEEE Transactions on Multimedia
|
{Exploiting Temporal Correlations for {3D} Human Pose Estimation}
|
Exploiting Temporal Correlations for 3D Human Pose ...
|
http://ieeexplore.ieee.org/document/10278485/
|
Exploiting the rich temporal information in human pose sequences to facilitate 3D pose estimation has garnered particular attention.
|
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose
Estimation
|
2506.02853v1
|
tang20233d
|
\cite{tang20233d}
|
{3D} human pose estimation with spatio-temporal criss-cross attention
| null | null | true | false |
Tang, Zhenhua and Qiu, Zhaofan and Hao, Yanbin and Hong, Richang and Yao, Ting
| 2,023 | null | null | null | null |
{3D} human pose estimation with spatio-temporal criss-cross attention
|
zhenhuat/STCFormer: (CVPR2023)3D Human Pose ...
|
https://github.com/zhenhuat/STCFormer
|
This is the readme file for the code release of 3D Human Pose Estimation with Spatio-Temporal Criss-cross Attention on PyTorch platform.
|
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose
Estimation
|
2506.02853v1
|
li2022mhformer
|
\cite{li2022mhformer}
|
MHFormer: Multi-Hypothesis Transformer for 3D Human Pose Estimation
|
http://arxiv.org/abs/2111.12707v4
|
Estimating 3D human poses from monocular videos is a challenging task due to
depth ambiguity and self-occlusion. Most existing works attempt to solve both
issues by exploiting spatial and temporal relationships. However, those works
ignore the fact that it is an inverse problem where multiple feasible solutions
(i.e., hypotheses) exist. To relieve this limitation, we propose a
Multi-Hypothesis Transformer (MHFormer) that learns spatio-temporal
representations of multiple plausible pose hypotheses. In order to effectively
model multi-hypothesis dependencies and build strong relationships across
hypothesis features, the task is decomposed into three stages: (i) Generate
multiple initial hypothesis representations; (ii) Model self-hypothesis
communication, merge multiple hypotheses into a single converged representation
and then partition it into several diverged hypotheses; (iii) Learn
cross-hypothesis communication and aggregate the multi-hypothesis features to
synthesize the final 3D pose. Through the above processes, the final
representation is enhanced and the synthesized pose is much more accurate.
Extensive experiments show that MHFormer achieves state-of-the-art results on
two challenging datasets: Human3.6M and MPI-INF-3DHP. Without bells and
whistles, its performance surpasses the previous best result by a large margin
of 3% on Human3.6M. Code and models are available at
\url{https://github.com/Vegetebird/MHFormer}.
| true | true |
Li, Wenhao and Liu, Hong and Tang, Hao and Wang, Pichao and Van Gool, Luc
| 2,022 | null | null | null | null |
MHFormer: Multi-Hypothesis Transformer for 3D Human Pose Estimation
|
Multi-Hypothesis Transformer for 3D Human Pose Estimation - arXiv
|
https://arxiv.org/abs/2111.12707
|
We propose a Multi-Hypothesis Transformer (MHFormer) that learns spatio-temporal representations of multiple plausible pose hypotheses.
|
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose
Estimation
|
2506.02853v1
|
liu2023posynda
|
\cite{liu2023posynda}
|
PoSynDA: Multi-Hypothesis Pose Synthesis Domain Adaptation for Robust 3D
Human Pose Estimation
|
http://arxiv.org/abs/2308.09678v2
|
Existing 3D human pose estimators face challenges in adapting to new datasets
due to the lack of 2D-3D pose pairs in training sets. To overcome this issue,
we propose \textit{Multi-Hypothesis \textbf{P}ose \textbf{Syn}thesis
\textbf{D}omain \textbf{A}daptation} (\textbf{PoSynDA}) framework to bridge
this data disparity gap in target domain. Typically, PoSynDA uses a
diffusion-inspired structure to simulate 3D pose distribution in the target
domain. By incorporating a multi-hypothesis network, PoSynDA generates diverse
pose hypotheses and aligns them with the target domain. To do this, it first
utilizes target-specific source augmentation to obtain the target domain
distribution data from the source domain by decoupling the scale and position
parameters. The process is then further refined through the teacher-student
paradigm and low-rank adaptation. With extensive comparison of benchmarks such
as Human3.6M and MPI-INF-3DHP, PoSynDA demonstrates competitive performance,
even comparable to the target-trained MixSTE model\cite{zhang2022mixste}. This
work paves the way for the practical application of 3D human pose estimation in
unseen domains. The code is available at https://github.com/hbing-l/PoSynDA.
| true | true |
Liu, Hanbing and He, Jun-Yan and Cheng, Zhi-Qi and Xiang, Wangmeng and Yang, Qize and Chai, Wenhao and Wang, Gaoang and Bao, Xu and Luo, Bin and Geng, Yifeng and others
| 2,023 | null | null | null | null |
PoSynDA: Multi-Hypothesis Pose Synthesis Domain Adaptation for Robust 3D
Human Pose Estimation
|
PoSynDA: Multi-Hypothesis Pose Synthesis Domain ...
|
https://github.com/hbing-l/PoSynDA
|
PoSynDA is a novel framework for 3D Human Pose Estimation (3D HPE) that addresses the challenges of adapting to new datasets due to the scarcity of 2D-3D
|
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose
Estimation
|
2506.02853v1
|
chen2023hdformer
|
\cite{chen2023hdformer}
|
HDFormer: High-order Directed Transformer for 3D Human Pose Estimation
|
http://arxiv.org/abs/2302.01825v2
|
Human pose estimation is a challenging task due to its structured data
sequence nature. Existing methods primarily focus on pair-wise interaction of
body joints, which is insufficient for scenarios involving overlapping joints
and rapidly changing poses. To overcome these issues, we introduce a novel
approach, the High-order Directed Transformer (HDFormer), which leverages
high-order bone and joint relationships for improved pose estimation.
Specifically, HDFormer incorporates both self-attention and high-order
attention to formulate a multi-order attention module. This module facilitates
first-order "joint$\leftrightarrow$joint", second-order
"bone$\leftrightarrow$joint", and high-order "hyperbone$\leftrightarrow$joint"
interactions, effectively addressing issues in complex and occlusion-heavy
situations. In addition, modern CNN techniques are integrated into the
transformer-based architecture, balancing the trade-off between performance and
efficiency. HDFormer significantly outperforms state-of-the-art (SOTA) models
on Human3.6M and MPI-INF-3DHP datasets, requiring only 1/10 of the parameters
and significantly lower computational costs. Moreover, HDFormer demonstrates
broad real-world applicability, enabling real-time, accurate 3D pose
estimation. The source code is in https://github.com/hyer/HDFormer
| true | true |
Chen, Hanyuan and He, Jun-Yan and Xiang, Wangmeng and Cheng, Zhi-Qi and Liu, Wei and Liu, Hanbing and Luo, Bin and Geng, Yifeng and Xie, Xuansong
| 2,023 | null | null | null | null |
HDFormer: High-order Directed Transformer for 3D Human Pose Estimation
|
High-order Directed Transformer for 3D Human Pose Estimation
|
https://arxiv.org/abs/2302.01825
|
HDFormer is a novel approach for 3D human pose estimation using high-order bone and joint relationships, addressing issues with overlapping
|
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose
Estimation
|
2506.02853v1
|
hu2021conditional
|
\cite{hu2021conditional}
|
Conditional Directed Graph Convolution for 3D Human Pose Estimation
|
http://arxiv.org/abs/2107.07797v2
|
Graph convolutional networks have significantly improved 3D human pose
estimation by representing the human skeleton as an undirected graph. However,
this representation fails to reflect the articulated characteristic of human
skeletons as the hierarchical orders among the joints are not explicitly
presented. In this paper, we propose to represent the human skeleton as a
directed graph with the joints as nodes and bones as edges that are directed
from parent joints to child joints. By so doing, the directions of edges can
explicitly reflect the hierarchical relationships among the nodes. Based on
this representation, we further propose a spatial-temporal conditional directed
graph convolution to leverage varying non-local dependence for different poses
by conditioning the graph topology on input poses. Altogether, we form a
U-shaped network, named U-shaped Conditional Directed Graph Convolutional
Network, for 3D human pose estimation from monocular videos. To evaluate the
effectiveness of our method, we conducted extensive experiments on two
challenging large-scale benchmarks: Human3.6M and MPI-INF-3DHP. Both
quantitative and qualitative results show that our method achieves top
performance. Also, ablation studies show that directed graphs can better
exploit the hierarchy of articulated human skeletons than undirected graphs,
and the conditional connections can yield adaptive graph topologies for
different poses.
| true | true |
Hu, Wenbo and Zhang, Changgong and Zhan, Fangneng and Zhang, Lei and Wong, Tien-Tsin
| 2,021 | null | null | null | null |
Conditional Directed Graph Convolution for 3D Human Pose Estimation
|
Conditional Directed Graph Convolution for 3D Human Pose Estimation
|
http://arxiv.org/pdf/2107.07797v2
|
Graph convolutional networks have significantly improved 3D human pose
estimation by representing the human skeleton as an undirected graph. However,
this representation fails to reflect the articulated characteristic of human
skeletons as the hierarchical orders among the joints are not explicitly
presented. In this paper, we propose to represent the human skeleton as a
directed graph with the joints as nodes and bones as edges that are directed
from parent joints to child joints. By so doing, the directions of edges can
explicitly reflect the hierarchical relationships among the nodes. Based on
this representation, we further propose a spatial-temporal conditional directed
graph convolution to leverage varying non-local dependence for different poses
by conditioning the graph topology on input poses. Altogether, we form a
U-shaped network, named U-shaped Conditional Directed Graph Convolutional
Network, for 3D human pose estimation from monocular videos. To evaluate the
effectiveness of our method, we conducted extensive experiments on two
challenging large-scale benchmarks: Human3.6M and MPI-INF-3DHP. Both
quantitative and qualitative results show that our method achieves top
performance. Also, ablation studies show that directed graphs can better
exploit the hierarchy of articulated human skeletons than undirected graphs,
and the conditional connections can yield adaptive graph topologies for
different poses.
|
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose
Estimation
|
2506.02853v1
|
ci2019optimizing
|
\cite{ci2019optimizing}
|
Optimizing network structure for {3D} human pose estimation
| null | null | true | false |
Ci, Hai and Wang, Chunyu and Ma, Xiaoxuan and Wang, Yizhou
| 2,019 | null | null | null | null |
Optimizing network structure for {3D} human pose estimation
|
Optimizing Network Structure for 3D Human Pose Estimation
|
https://openaccess.thecvf.com/content_ICCV_2019/papers/Ci_Optimizing_Network_Structure_for_3D_Human_Pose_Estimation_ICCV_2019_paper.pdf
|
by H Ci · 2019 · Cited by 312 — A 3D human pose is naturally represented by a skeletal graph parameterized by the 3D locations of the body joints such as elbows and knees. See Figure 1. When
|
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose
Estimation
|
2506.02853v1
|
liu2020comprehensive
|
\cite{liu2020comprehensive}
|
A comprehensive study of weight sharing in graph networks for {3D} human pose estimation
| null | null | true | false |
Liu, Kenkun and Ding, Rongqi and Zou, Zhiming and Wang, Le and Tang, Wei
| 2,020 | null | null | null | null |
A comprehensive study of weight sharing in graph networks for {3D} human pose estimation
|
A Comprehensive Study of Weight Sharing in Graph ...
|
https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123550324.pdf
|
by K Liu · Cited by 182 — Graph convolutional networks (GCNs) have been applied to 3D human pose estimation (HPE) from 2D body joint detections and have shown encouraging performance.
|
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose
Estimation
|
2506.02853v1
|
wang2018non
|
\cite{wang2018non}
|
Non-local Neural Networks
|
http://arxiv.org/abs/1711.07971v3
|
Both convolutional and recurrent operations are building blocks that process
one local neighborhood at a time. In this paper, we present non-local
operations as a generic family of building blocks for capturing long-range
dependencies. Inspired by the classical non-local means method in computer
vision, our non-local operation computes the response at a position as a
weighted sum of the features at all positions. This building block can be
plugged into many computer vision architectures. On the task of video
classification, even without any bells and whistles, our non-local models can
compete or outperform current competition winners on both Kinetics and Charades
datasets. In static image recognition, our non-local models improve object
detection/segmentation and pose estimation on the COCO suite of tasks. Code is
available at https://github.com/facebookresearch/video-nonlocal-net .
| true | true |
Wang, Xiaolong and Girshick, Ross and Gupta, Abhinav and He, Kaiming
| 2,018 | null | null | null | null |
Non-local Neural Networks
|
[PDF] Non-Local Neural Networks - CVF Open Access
|
https://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Non-Local_Neural_Networks_CVPR_2018_paper.pdf
|
Non-local operations capture long-range dependencies by computing a weighted sum of features at all positions, unlike local operations. They are efficient and
|
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose
Estimation
|
2506.02853v1
|
gong2023diffpose
|
\cite{gong2023diffpose}
|
DiffPose: Toward More Reliable 3D Pose Estimation
|
http://arxiv.org/abs/2211.16940v3
|
Monocular 3D human pose estimation is quite challenging due to the inherent
ambiguity and occlusion, which often lead to high uncertainty and
indeterminacy. On the other hand, diffusion models have recently emerged as an
effective tool for generating high-quality images from noise. Inspired by their
capability, we explore a novel pose estimation framework (DiffPose) that
formulates 3D pose estimation as a reverse diffusion process. We incorporate
novel designs into our DiffPose to facilitate the diffusion process for 3D pose
estimation: a pose-specific initialization of pose uncertainty distributions, a
Gaussian Mixture Model-based forward diffusion process, and a
context-conditioned reverse diffusion process. Our proposed DiffPose
significantly outperforms existing methods on the widely used pose estimation
benchmarks Human3.6M and MPI-INF-3DHP. Project page:
https://gongjia0208.github.io/Diffpose/.
| true | true |
Gong, Jia and Foo, Lin Geng and Fan, Zhipeng and Ke, Qiuhong and Rahmani, Hossein and Liu, Jun
| 2,023 | null | null | null | null |
DiffPose: Toward More Reliable 3D Pose Estimation
|
DiffPose: Toward More Reliable 3D Pose Estimation
|
http://arxiv.org/pdf/2211.16940v3
|
Monocular 3D human pose estimation is quite challenging due to the inherent
ambiguity and occlusion, which often lead to high uncertainty and
indeterminacy. On the other hand, diffusion models have recently emerged as an
effective tool for generating high-quality images from noise. Inspired by their
capability, we explore a novel pose estimation framework (DiffPose) that
formulates 3D pose estimation as a reverse diffusion process. We incorporate
novel designs into our DiffPose to facilitate the diffusion process for 3D pose
estimation: a pose-specific initialization of pose uncertainty distributions, a
Gaussian Mixture Model-based forward diffusion process, and a
context-conditioned reverse diffusion process. Our proposed DiffPose
significantly outperforms existing methods on the widely used pose estimation
benchmarks Human3.6M and MPI-INF-3DHP. Project page:
https://gongjia0208.github.io/Diffpose/.
|
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose
Estimation
|
2506.02853v1
|
holmquist2023diffpose
|
\cite{holmquist2023diffpose}
|
DiffPose: Multi-hypothesis Human Pose Estimation using Diffusion models
|
http://arxiv.org/abs/2211.16487v1
|
Traditionally, monocular 3D human pose estimation employs a machine learning
model to predict the most likely 3D pose for a given input image. However, a
single image can be highly ambiguous and induces multiple plausible solutions
for the 2D-3D lifting step which results in overly confident 3D pose
predictors. To this end, we propose \emph{DiffPose}, a conditional diffusion
model, that predicts multiple hypotheses for a given input image. In comparison
to similar approaches, our diffusion model is straightforward and avoids
intensive hyperparameter tuning, complex network structures, mode collapse, and
unstable training. Moreover, we tackle a problem of the common two-step
approach that first estimates a distribution of 2D joint locations via
joint-wise heatmaps and consecutively approximates them based on first- or
second-moment statistics. Since such a simplification of the heatmaps removes
valid information about possibly correct, though labeled unlikely, joint
locations, we propose to represent the heatmaps as a set of 2D joint candidate
samples. To extract information about the original distribution from these
samples we introduce our \emph{embedding transformer} that conditions the
diffusion model. Experimentally, we show that DiffPose slightly improves upon
the state of the art for multi-hypothesis pose estimation for simple poses and
outperforms it by a large margin for highly ambiguous poses.
| true | true |
Holmquist, Karl and Wandt, Bastian
| 2,023 | null | null | null | null |
DiffPose: Multi-hypothesis Human Pose Estimation using Diffusion models
|
Multi-hypothesis Human Pose Estimation using Diffusion models
|
https://arxiv.org/abs/2211.16487
|
We propose \emph{DiffPose}, a conditional diffusion model, that predicts multiple hypotheses for a given input image.
|