parent_paper_title (stringclasses, 63 values) | parent_paper_arxiv_id (stringclasses, 63 values) | citation_shorthand (stringlengths 2-56) | raw_citation_text (stringlengths 9-63) | cited_paper_title (stringlengths 5-161) | cited_paper_arxiv_link (stringlengths 32-37, nullable) | cited_paper_abstract (stringlengths 406-1.92k, nullable) | has_metadata (bool, 1 class) | is_arxiv_paper (bool, 2 classes) | bib_paper_authors (stringlengths 2-2.44k, nullable) | bib_paper_year (float64, 1.97k-2.03k, nullable) | bib_paper_month (stringclasses, 16 values) | bib_paper_url (stringlengths 20-116, nullable) | bib_paper_doi (stringclasses, 269 values) | bib_paper_journal (stringlengths 3-148, nullable) | original_title (stringlengths 5-161) | search_res_title (stringlengths 4-122) | search_res_url (stringlengths 22-267) | search_res_content (stringlengths 19-1.92k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Dual Debiasing for Noisy In-Context Learning for Text Generation
|
2506.00418v1
|
min2022rethinking
|
\cite{min2022rethinking}
|
Rethinking the Role of Demonstrations: What Makes In-Context Learning
Work?
|
http://arxiv.org/abs/2202.12837v2
|
Large language models (LMs) are able to in-context learn -- perform a new
task via inference alone by conditioning on a few input-label pairs
(demonstrations) and making predictions for new inputs. However, there has been
little understanding of how the model learns and which aspects of the
demonstrations contribute to end task performance. In this paper, we show that
ground truth demonstrations are in fact not required -- randomly replacing
labels in the demonstrations barely hurts performance on a range of
classification and multi-choice tasks, consistently over 12 different models
including GPT-3. Instead, we find that other aspects of the demonstrations are
the key drivers of end task performance, including the fact that they provide a
few examples of (1) the label space, (2) the distribution of the input text,
and (3) the overall format of the sequence. Together, our analysis provides a
new way of understanding how and why in-context learning works, while opening
up new questions about how much can be learned from large language models
through inference alone.
| true | true |
Min, Sewon and Lyu, Xinxi and Holtzman, Ari and Artetxe, Mikel and Lewis, Mike and Hajishirzi, Hannaneh and Zettlemoyer, Luke
| 2022 | null | null | null |
arXiv preprint arXiv:2202.12837
|
Rethinking the Role of Demonstrations: What Makes In-Context Learning
Work?
|
[PDF] What Makes In-Context Learning Work? - ACL Anthology
|
https://aclanthology.org/2022.emnlp-main.759.pdf
|
Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? Large language models (LMs) are able to in-context learn—perform a new task via
|
Dual Debiasing for Noisy In-Context Learning for Text Generation
|
2506.00418v1
|
kang2024context
|
\cite{kang2024context}
|
In-Context Learning with Noisy Labels
|
http://arxiv.org/abs/2411.19581v1
|
In-context learning refers to the emerging ability of large language models
(LLMs) to perform a target task without additional training, utilizing
demonstrations of the task. Recent studies aim to enhance in-context learning
performance by selecting more useful demonstrations. However, they overlook the
presence of inevitable noisy labels in task demonstrations that arise during
the labeling process in the real-world. In this paper, we propose a new task,
in-context learning with noisy labels, which aims to solve real-world problems
for in-context learning where labels in task demonstrations would be corrupted.
Moreover, we propose a new method and baseline methods for the new task,
inspired by studies in learning with noisy labels. Through experiments, we
demonstrate that our proposed method can serve as a safeguard against
performance degradation in in-context learning caused by noisy labels.
| true | true |
Kang, Junyong and Son, Donghyun and Song, Hwanjun and Chang, Buru
| 2024 | null | null | null |
arXiv preprint arXiv:2411.19581
|
In-Context Learning with Noisy Labels
|
[2411.19581] In-Context Learning with Noisy Labels - arXiv
|
https://arxiv.org/abs/2411.19581
|
In this paper, we propose a new task, in-context learning with noisy labels, which aims to solve real-world problems for in-context learning.
|
Dual Debiasing for Noisy In-Context Learning for Text Generation
|
2506.00418v1
|
gao2024noise
|
\cite{gao2024noise}
|
On the Noise Robustness of In-Context Learning for Text Generation
|
http://arxiv.org/abs/2405.17264v3
|
Large language models (LLMs) have shown impressive performance on downstream
tasks by in-context learning (ICL), which heavily relies on the quality of
demonstrations selected from a large set of annotated examples. Recent works
claim that in-context learning is robust to noisy demonstrations in text
classification. In this work, we show that, on text generation tasks, noisy
annotations significantly hurt the performance of in-context learning. To
circumvent the issue, we propose a simple and effective approach called Local
Perplexity Ranking (LPR), which replaces the "noisy" candidates with their
nearest neighbors that are more likely to be clean. Our method is motivated by
analyzing the perplexity deviation caused by noisy labels and decomposing
perplexity into inherent perplexity and matching perplexity. Our key idea
behind LPR is thus to decouple the matching perplexity by performing the
ranking among the neighbors in semantic space. Our approach can prevent the
selected demonstrations from including mismatched input-label pairs while
preserving the effectiveness of the original selection methods. Extensive
experiments demonstrate the effectiveness of LPR, improving the EM score by up
to 18.75 on common benchmarks with noisy annotations. Our code is available at
https://github.com/ml-stat-Sustech/Local-Perplexity-Ranking.
| true | true |
Gao, Hongfu and Zhang, Feipeng and Jiang, Wenyu and Shu, Jun and Zheng, Feng and Wei, Hongxin
| 2024 | null | null | null | null |
On the Noise Robustness of In-Context Learning for Text Generation
|
On the Noise Robustness of In-Context Learning for Text ...
|
https://openreview.net/forum?id=00uVk06eVK&referrer=%5Bthe%20profile%20of%20Hongxin%20Wei%5D(%2Fprofile%3Fid%3D~Hongxin_Wei1)
|
The paper "On the Noise Robustness of In-Context Learning for Text Generation" investigates how LLMs handle noisy annotations during in-context
|
Dual Debiasing for Noisy In-Context Learning for Text Generation
|
2506.00418v1
|
li2022contrastive
|
\cite{li2022contrastive}
|
Contrastive Decoding: Open-ended Text Generation as Optimization
|
http://arxiv.org/abs/2210.15097v2
|
Given a language model (LM), maximum probability is a poor decoding objective
for open-ended generation, because it produces short and repetitive text. On
the other hand, sampling can often produce incoherent text that drifts from the
original topics. We propose contrastive decoding (CD), a reliable decoding
approach that optimizes a contrastive objective subject to a plausibility
constraint. The contrastive objective returns the difference between the
likelihood under a large LM (called the expert, e.g. OPT-13B) and a small LM
(called the amateur, e.g. OPT-125M), and the constraint ensures that the
outputs are plausible. CD is inspired by the fact that the failures of larger
LMs (e.g., repetition, incoherence) are even more prevalent in smaller LMs, and
that this difference signals which texts should be preferred. CD requires zero
additional training, and produces higher quality text than decoding from the
larger LM alone. It also works across model scales (OPT-13B and GPT2-1.5B) and
significantly outperforms four strong decoding algorithms (e.g., nucleus,
top-k) in automatic and human evaluations across wikipedia, news and story
domains.
| true | true |
Li, Xiang Lisa and Holtzman, Ari and Fried, Daniel and Liang, Percy and Eisner, Jason and Hashimoto, Tatsunori and Zettlemoyer, Luke and Lewis, Mike
| 2022 | null | null | null |
arXiv preprint arXiv:2210.15097
|
Contrastive Decoding: Open-ended Text Generation as Optimization
|
Contrastive Decoding: Open-ended Text Generation as Optimization
|
https://arxiv.org/abs/2210.15097
|
We propose contrastive decoding (CD), a reliable decoding approach that optimizes a contrastive objective subject to a plausibility constraint.
|
Dual Debiasing for Noisy In-Context Learning for Text Generation
|
2506.00418v1
|
zhao2024enhancing
|
\cite{zhao2024enhancing}
|
Enhancing Contextual Understanding in Large Language Models through
Contrastive Decoding
|
http://arxiv.org/abs/2405.02750v1
|
Large language models (LLMs) tend to inadequately integrate input context
during text generation, relying excessively on encoded prior knowledge in model
parameters, potentially resulting in generated text with factual
inconsistencies or contextually unfaithful content. LLMs utilize two primary
knowledge sources: 1) prior (parametric) knowledge from pretraining, and 2)
contextual (non-parametric) knowledge from input prompts. The study addresses
the open question of how LLMs effectively balance these knowledge sources
during the generation process, specifically in the context of open-domain
question answering. To address this issue, we introduce a novel approach
integrating contrastive decoding with adversarial irrelevant passages as
negative samples to enhance robust context grounding during generation.
Notably, our method operates at inference time without requiring further
training. We conduct comprehensive experiments to demonstrate its applicability
and effectiveness, providing empirical evidence showcasing its superiority over
existing methodologies. Our code is publicly available at:
https://github.com/amazon-science/ContextualUnderstanding-ContrastiveDecoding.
| true | true |
Zhao, Zheng and Monti, Emilio and Lehmann, Jens and Assem, Haytham
| 2024 | null | null | null |
arXiv preprint arXiv:2405.02750
|
Enhancing Contextual Understanding in Large Language Models through
Contrastive Decoding
|
Enhancing Contextual Understanding in Large Language Models ...
|
https://aclanthology.org/2024.naacl-long.237/
|
We introduce a novel approach integrating contrastive decoding with adversarial irrelevant passages as negative samples to enhance robust context grounding
|
Dual Debiasing for Noisy In-Context Learning for Text Generation
|
2506.00418v1
|
fei2023mitigating
|
\cite{fei2023mitigating}
|
Mitigating Label Biases for In-context Learning
|
http://arxiv.org/abs/2305.19148v3
|
Various design settings for in-context learning (ICL), such as the choice and
order of the in-context examples, can bias a model toward a particular
prediction without being reflective of an understanding of the task. While many
studies discuss these design choices, there have been few systematic
investigations into categorizing them and mitigating their impact. In this
work, we define a typology for three types of label biases in ICL for text
classification: vanilla-label bias, context-label bias, and domain-label bias
(which we conceptualize and detect for the first time).
Our analysis demonstrates that prior label bias calibration methods fall
short of addressing all three types of biases. Specifically, domain-label bias
restricts LLMs to random-level performance on many tasks regardless of the
choice of in-context examples. To mitigate the effect of these biases, we
propose a simple bias calibration method that estimates a language model's
label bias using random in-domain words from the task corpus. After controlling
for this estimated bias when making predictions, our novel domain-context
calibration significantly improves the ICL performance of GPT-J and GPT-3 on a
wide range of tasks. The gain is substantial on tasks with large domain-label
bias (up to 37% in Macro-F1). Furthermore, our results generalize to models
with different scales, pretraining methods, and manually-designed task
instructions, showing the prevalence of label biases in ICL.
| true | true |
Fei, Yu and Hou, Yifan and Chen, Zeming and Bosselut, Antoine
| 2023 | null | null | null |
arXiv preprint arXiv:2305.19148
|
Mitigating Label Biases for In-context Learning
|
[2305.19148] Mitigating Label Biases for In-context Learning - arXiv
|
https://arxiv.org/abs/2305.19148
|
In this work, we define a typology for three types of label biases in ICL for text classification: vanilla-label bias, context-label bias, and domain-label
|
Dual Debiasing for Noisy In-Context Learning for Text Generation
|
2506.00418v1
|
zhao2021calibrate
|
\cite{zhao2021calibrate}
|
Calibrate Before Use: Improving Few-Shot Performance of Language Models
|
http://arxiv.org/abs/2102.09690v2
|
GPT-3 can perform numerous tasks when provided a natural language prompt that
contains a few training examples. We show that this type of few-shot learning
can be unstable: the choice of prompt format, training examples, and even the
order of the training examples can cause accuracy to vary from near chance to
near state-of-the-art. We demonstrate that this instability arises from the
bias of language models towards predicting certain answers, e.g., those that
are placed near the end of the prompt or are common in the pre-training data.
To mitigate this, we first estimate the model's bias towards each answer by
asking for its prediction when given the training prompt and a content-free
test input such as "N/A". We then fit calibration parameters that cause the
prediction for this input to be uniform across answers. On a diverse set of
tasks, this contextual calibration procedure substantially improves GPT-3 and
GPT-2's average accuracy (up to 30.0% absolute) and reduces variance across
different choices of the prompt.
| true | true |
Zhao, Zihao and Wallace, Eric and Feng, Shi and Klein, Dan and Singh, Sameer
| 2021 | null | null | null | null |
Calibrate Before Use: Improving Few-Shot Performance of Language Models
|
Calibrate Before Use: Improving Few-Shot Performance of ...
|
http://proceedings.mlr.press/v139/zhao21c/zhao21c.pdf
|
by Z Zhao · 2021 · Cited by 1608 — Overall, contextual calibration is a simple method that makes language models better few-shot learners: it enables end users to obtain higher accuracy with.
|
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding
based on Guided Space Transformation
|
2505.24754v1
|
NIPS2013_9aa42b31
|
\cite{NIPS2013_9aa42b31}
|
Distributed Representations of Words and Phrases and their
Compositionality
|
http://arxiv.org/abs/1310.4546v1
|
The recently introduced continuous Skip-gram model is an efficient method for
learning high-quality distributed vector representations that capture a large
number of precise syntactic and semantic word relationships. In this paper we
present several extensions that improve both the quality of the vectors and the
training speed. By subsampling of the frequent words we obtain significant
speedup and also learn more regular word representations. We also describe a
simple alternative to the hierarchical softmax called negative sampling. An
inherent limitation of word representations is their indifference to word order
and their inability to represent idiomatic phrases. For example, the meanings
of "Canada" and "Air" cannot be easily combined to obtain "Air Canada".
Motivated by this example, we present a simple method for finding phrases in
text, and show that learning good vector representations for millions of
phrases is possible.
| true | true |
Tomás Mikolov and
Ilya Sutskever and
Kai Chen and
Gregory S. Corrado and
Jeffrey Dean
| 2013 | null |
https://proceedings.neurips.cc/paper/2013/hash/9aa42b31882ec039965f3c4923ce901b-Abstract.html
| null | null |
Distributed Representations of Words and Phrases and their
Compositionality
|
[PDF] Distributed Representations of Words and Phrases and their ...
|
https://proceedings.neurips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf
|
Distributed representations of words use vector spaces to group similar words, capturing syntactic and semantic relationships, and are limited by their
|
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding
based on Guided Space Transformation
|
2505.24754v1
|
pennington-etal-2014-glove
|
\cite{pennington-etal-2014-glove}
|
Glove: Global Vectors for Word Representation
| null | null | true | false |
Jeffrey Pennington and
Richard Socher and
Christopher D. Manning
| 2014 | null |
https://doi.org/10.3115/v1/d14-1162
|
10.3115/V1/D14-1162
| null |
Glove: Global Vectors for Word Representation
|
GloVe: Global Vectors for Word Representation
|
https://nlp.stanford.edu/projects/glove/
|
GloVe: Global Vectors for Word Representation. Jeffrey Pennington, Richard Socher, Christopher D. Manning. GloVe is designed in order that such vector differences capture as much as possible the meaning specified by the juxtaposition of two words. The GloVe model is trained on the non-zero entries of a global word-word co-occurrence matrix, which tabulates how frequently words co-occur with one another in a given corpus. The training objective of GloVe is to learn word vectors such that their dot product equals the logarithm of the words' probability of co-occurrence. This feature is not unique to GloVe -- in fact, I'm unaware of any model for word vector learning that avoids this issue.
|
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding
based on Guided Space Transformation
|
2505.24754v1
|
transformer
|
\cite{transformer}
|
Attention Is All You Need
|
http://arxiv.org/abs/1706.03762v7
|
The dominant sequence transduction models are based on complex recurrent or
convolutional neural networks in an encoder-decoder configuration. The best
performing models also connect the encoder and decoder through an attention
mechanism. We propose a new simple network architecture, the Transformer, based
solely on attention mechanisms, dispensing with recurrence and convolutions
entirely. Experiments on two machine translation tasks show these models to be
superior in quality while being more parallelizable and requiring significantly
less time to train. Our model achieves 28.4 BLEU on the WMT 2014
English-to-German translation task, improving over the existing best results,
including ensembles by over 2 BLEU. On the WMT 2014 English-to-French
translation task, our model establishes a new single-model state-of-the-art
BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction
of the training costs of the best models from the literature. We show that the
Transformer generalizes well to other tasks by applying it successfully to
English constituency parsing both with large and limited training data.
| true | true |
Ashish Vaswani and
Noam Shazeer and
Niki Parmar and
Jakob Uszkoreit and
Llion Jones and
Aidan N. Gomez and
Lukasz Kaiser and
Illia Polosukhin
| 2017 | null |
https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html
| null | null |
Attention Is All You Need
|
Attention Is All You Need
|
http://arxiv.org/pdf/1706.03762v7
|
The dominant sequence transduction models are based on complex recurrent or
convolutional neural networks in an encoder-decoder configuration. The best
performing models also connect the encoder and decoder through an attention
mechanism. We propose a new simple network architecture, the Transformer, based
solely on attention mechanisms, dispensing with recurrence and convolutions
entirely. Experiments on two machine translation tasks show these models to be
superior in quality while being more parallelizable and requiring significantly
less time to train. Our model achieves 28.4 BLEU on the WMT 2014
English-to-German translation task, improving over the existing best results,
including ensembles by over 2 BLEU. On the WMT 2014 English-to-French
translation task, our model establishes a new single-model state-of-the-art
BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction
of the training costs of the best models from the literature. We show that the
Transformer generalizes well to other tasks by applying it successfully to
English constituency parsing both with large and limited training data.
|
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding
based on Guided Space Transformation
|
2505.24754v1
|
devlin-etal-2019-bert
|
\cite{devlin-etal-2019-bert}
|
BERT: Pre-training of Deep Bidirectional Transformers for Language
Understanding
|
http://arxiv.org/abs/1810.04805v2
|
We introduce a new language representation model called BERT, which stands
for Bidirectional Encoder Representations from Transformers. Unlike recent
language representation models, BERT is designed to pre-train deep
bidirectional representations from unlabeled text by jointly conditioning on
both left and right context in all layers. As a result, the pre-trained BERT
model can be fine-tuned with just one additional output layer to create
state-of-the-art models for a wide range of tasks, such as question answering
and language inference, without substantial task-specific architecture
modifications.
BERT is conceptually simple and empirically powerful. It obtains new
state-of-the-art results on eleven natural language processing tasks, including
pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI
accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering
Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1
(5.1 point absolute improvement).
| true | true |
Jacob Devlin and
Ming-Wei Chang and
Kenton Lee and
Kristina Toutanova
| 2019 | null |
https://doi.org/10.18653/v1/n19-1423
|
10.18653/V1/N19-1423
| null |
BERT: Pre-training of Deep Bidirectional Transformers for Language
Understanding
|
[PDF] BERT: Pre-training of Deep Bidirectional Transformers for Language ...
|
https://aclanthology.org/N19-1423.pdf
|
Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. More recently, sentence or document encoders which produce contextual token representations have been pre-trained from unlabeled text and fine-tuned for a supervised downstream task (Dai and Le, 2015; Howard and Ruder, 2018; Radford et al., 2018).
|
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding
based on Guided Space Transformation
|
2505.24754v1
|
cer-etal-2018-universal
|
\cite{cer-etal-2018-universal}
|
Universal Sentence Encoder for English
| null | null | true | false |
Daniel Cer and
Yinfei Yang and
Sheng-yi Kong and
Nan Hua and
Nicole Limtiaco and
Rhomni St. John and
Noah Constant and
Mario Guajardo-Cespedes and
Steve Yuan and
Chris Tar and
Brian Strope and
Ray Kurzweil
| 2018 | null |
https://doi.org/10.18653/v1/d18-2029
|
10.18653/V1/D18-2029
| null |
Universal Sentence Encoder for English
|
[1803.11175] Universal Sentence Encoder - arXiv
|
https://arxiv.org/abs/1803.11175
|
We present models for encoding sentences into embedding vectors that specifically target transfer learning to other NLP tasks.
|
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding
based on Guided Space Transformation
|
2505.24754v1
|
reimers-gurevych-2019-sentence
|
\cite{reimers-gurevych-2019-sentence}
|
Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
|
http://arxiv.org/abs/1908.10084v1
|
BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) have set a new
state-of-the-art performance on sentence-pair regression tasks like semantic
textual similarity (STS). However, it requires that both sentences are fed into
the network, which causes a massive computational overhead: Finding the most
similar pair in a collection of 10,000 sentences requires about 50 million
inference computations (~65 hours) with BERT. The construction of BERT makes it
unsuitable for semantic similarity search as well as for unsupervised tasks
like clustering.
In this publication, we present Sentence-BERT (SBERT), a modification of the
pretrained BERT network that use siamese and triplet network structures to
derive semantically meaningful sentence embeddings that can be compared using
cosine-similarity. This reduces the effort for finding the most similar pair
from 65 hours with BERT / RoBERTa to about 5 seconds with SBERT, while
maintaining the accuracy from BERT.
We evaluate SBERT and SRoBERTa on common STS tasks and transfer learning
tasks, where it outperforms other state-of-the-art sentence embeddings methods.
| true | true |
Nils Reimers and
Iryna Gurevych
| 2019 | null |
https://doi.org/10.18653/v1/D19-1410
|
10.18653/V1/D19-1410
| null |
Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
|
[PDF] Sentence Embeddings using Siamese BERT-Networks
|
https://aclanthology.org/D19-1410.pdf
|
Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. Nils Reimers and Iryna Gurevych, Ubiquitous Knowledge Processing Lab (UKP-TUDA), Technische Universität Darmstadt. BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) have set a new state-of-the-art performance on sentence-pair regression tasks like semantic textual similarity (STS). We fine-tune SBERT on NLI data, which creates sentence embeddings that significantly outperform other state-of-the-art sentence embedding methods like InferSent (Conneau et al., 2017) and Universal Sentence Encoder (Cer et al., 2018).
|
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding
based on Guided Space Transformation
|
2505.24754v1
|
gao-etal-2021-simcse
|
\cite{gao-etal-2021-simcse}
|
SimCSE: Simple Contrastive Learning of Sentence Embeddings
|
http://arxiv.org/abs/2104.08821v4
|
This paper presents SimCSE, a simple contrastive learning framework that
greatly advances state-of-the-art sentence embeddings. We first describe an
unsupervised approach, which takes an input sentence and predicts itself in a
contrastive objective, with only standard dropout used as noise. This simple
method works surprisingly well, performing on par with previous supervised
counterparts. We find that dropout acts as minimal data augmentation, and
removing it leads to a representation collapse. Then, we propose a supervised
approach, which incorporates annotated pairs from natural language inference
datasets into our contrastive learning framework by using "entailment" pairs as
positives and "contradiction" pairs as hard negatives. We evaluate SimCSE on
standard semantic textual similarity (STS) tasks, and our unsupervised and
supervised models using BERT base achieve an average of 76.3% and 81.6%
Spearman's correlation respectively, a 4.2% and 2.2% improvement compared to
the previous best results. We also show -- both theoretically and empirically
-- that the contrastive learning objective regularizes pre-trained embeddings'
anisotropic space to be more uniform, and it better aligns positive pairs when
supervised signals are available.
| true | true |
Tianyu Gao and
Xingcheng Yao and
Danqi Chen
| 2021 | null |
https://doi.org/10.18653/v1/2021.emnlp-main.552
| null | null |
SimCSE: Simple Contrastive Learning of Sentence Embeddings
|
SimCSE: Simple Contrastive Learning of Sentence Embeddings
|
http://arxiv.org/pdf/2104.08821v4
|
This paper presents SimCSE, a simple contrastive learning framework that
greatly advances state-of-the-art sentence embeddings. We first describe an
unsupervised approach, which takes an input sentence and predicts itself in a
contrastive objective, with only standard dropout used as noise. This simple
method works surprisingly well, performing on par with previous supervised
counterparts. We find that dropout acts as minimal data augmentation, and
removing it leads to a representation collapse. Then, we propose a supervised
approach, which incorporates annotated pairs from natural language inference
datasets into our contrastive learning framework by using "entailment" pairs as
positives and "contradiction" pairs as hard negatives. We evaluate SimCSE on
standard semantic textual similarity (STS) tasks, and our unsupervised and
supervised models using BERT base achieve an average of 76.3% and 81.6%
Spearman's correlation respectively, a 4.2% and 2.2% improvement compared to
the previous best results. We also show -- both theoretically and empirically
-- that the contrastive learning objective regularizes pre-trained embeddings'
anisotropic space to be more uniform, and it better aligns positive pairs when
supervised signals are available.
|
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding
based on Guided Space Transformation
|
2505.24754v1
|
zhuo-etal-2023-whitenedcse
|
\cite{zhuo-etal-2023-whitenedcse}
|
WhitenedCSE: Whitening-based Contrastive Learning of Sentence Embeddings
| null | null | true | false |
Wenjie Zhuo and
Yifan Sun and
Xiaohan Wang and
Linchao Zhu and
Yi Yang
| 2023 | null |
https://doi.org/10.18653/v1/2023.acl-long.677
|
10.18653/V1/2023.ACL-LONG.677
| null |
WhitenedCSE: Whitening-based Contrastive Learning of Sentence Embeddings
|
Whitening-based Contrastive Learning of Sentence Embeddings
|
https://aclanthology.org/2023.acl-long.677/
|
This paper presents a whitening-based contrastive learning method for sentence embedding learning (WhitenedCSE), which combines contrastive learning with a
|
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding
based on Guided Space Transformation
|
2505.24754v1
|
wang2023improving
|
\cite{wang2023improving}
|
Improving Text Embeddings with Large Language Models
|
http://arxiv.org/abs/2401.00368v3
|
In this paper, we introduce a novel and simple method for obtaining
high-quality text embeddings using only synthetic data and less than 1k
training steps. Unlike existing methods that often depend on multi-stage
intermediate pre-training with billions of weakly-supervised text pairs,
followed by fine-tuning with a few labeled datasets, our method does not
require building complex training pipelines or relying on manually collected
datasets that are often constrained by task diversity and language coverage. We
leverage proprietary LLMs to generate diverse synthetic data for hundreds of
thousands of text embedding tasks across 93 languages. We then fine-tune
open-source decoder-only LLMs on the synthetic data using standard contrastive
loss. Experiments demonstrate that our method achieves strong performance on
highly competitive text embedding benchmarks without using any labeled data.
Furthermore, when fine-tuned with a mixture of synthetic and labeled data, our
model sets new state-of-the-art results on the BEIR and MTEB benchmarks.
| true | true |
Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu
| 2023 | null |
https://doi.org/10.48550/arXiv.2401.00368
| null |
arXiv
|
Improving Text Embeddings with Large Language Models
|
Improving Text Embeddings with Large Language Models
|
http://arxiv.org/pdf/2401.00368v3
|
In this paper, we introduce a novel and simple method for obtaining
high-quality text embeddings using only synthetic data and less than 1k
training steps. Unlike existing methods that often depend on multi-stage
intermediate pre-training with billions of weakly-supervised text pairs,
followed by fine-tuning with a few labeled datasets, our method does not
require building complex training pipelines or relying on manually collected
datasets that are often constrained by task diversity and language coverage. We
leverage proprietary LLMs to generate diverse synthetic data for hundreds of
thousands of text embedding tasks across 93 languages. We then fine-tune
open-source decoder-only LLMs on the synthetic data using standard contrastive
loss. Experiments demonstrate that our method achieves strong performance on
highly competitive text embedding benchmarks without using any labeled data.
Furthermore, when fine-tuned with a mixture of synthetic and labeled data, our
model sets new state-of-the-art results on the BEIR and MTEB benchmarks.
|
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding
based on Guided Space Transformation
|
2505.24754v1
|
muennighoff2024generative
|
\cite{muennighoff2024generative}
|
Generative Representational Instruction Tuning
|
http://arxiv.org/abs/2402.09906v3
|
All text-based language problems can be reduced to either generation or
embedding. Current models only perform well at one or the other. We introduce
generative representational instruction tuning (GRIT) whereby a large language
model is trained to handle both generative and embedding tasks by
distinguishing between them through instructions. Compared to other open
models, our resulting GritLM 7B sets a new state of the art on the Massive Text
Embedding Benchmark (MTEB) and outperforms all models up to its size on a range
of generative tasks. By scaling up further, GritLM 8x7B outperforms all open
generative language models that we tried while still being among the best
embedding models. Notably, we find that GRIT matches training on only
generative or embedding data, thus we can unify both at no performance loss.
Among other benefits, the unification via GRIT speeds up Retrieval-Augmented
Generation (RAG) by > 60% for long documents, by no longer requiring separate
retrieval and generation models. Models, code, etc. are freely available at
https://github.com/ContextualAI/gritlm.
| true | true |
Niklas Muennighoff and
Hongjin Su and
Liang Wang and
Nan Yang and
Furu Wei and
Tao Yu and
Amanpreet Singh and
Douwe Kiela
| 2025 | null |
https://openreview.net/forum?id=BC4lIvfSzv
| null | null |
Generative Representational Instruction Tuning
|
Generative Representational Instruction Tuning
|
http://arxiv.org/pdf/2402.09906v3
|
All text-based language problems can be reduced to either generation or
embedding. Current models only perform well at one or the other. We introduce
generative representational instruction tuning (GRIT) whereby a large language
model is trained to handle both generative and embedding tasks by
distinguishing between them through instructions. Compared to other open
models, our resulting GritLM 7B sets a new state of the art on the Massive Text
Embedding Benchmark (MTEB) and outperforms all models up to its size on a range
of generative tasks. By scaling up further, GritLM 8x7B outperforms all open
generative language models that we tried while still being among the best
embedding models. Notably, we find that GRIT matches training on only
generative or embedding data, thus we can unify both at no performance loss.
Among other benefits, the unification via GRIT speeds up Retrieval-Augmented
Generation (RAG) by > 60% for long documents, by no longer requiring separate
retrieval and generation models. Models, code, etc. are freely available at
https://github.com/ContextualAI/gritlm.
|
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding
based on Guided Space Transformation
|
2505.24754v1
|
lei-etal-2024-meta
|
\cite{lei-etal-2024-meta}
|
Meta-Task Prompting Elicits Embeddings from Large Language Models
|
http://arxiv.org/abs/2402.18458v2
|
We introduce a new unsupervised text embedding method, Meta-Task Prompting
with Explicit One-Word Limitation (MetaEOL), for generating high-quality
sentence embeddings from Large Language Models (LLMs) without the need for
model fine-tuning. Leveraging meta-task prompting, MetaEOL guides LLMs to
produce embeddings through a series of carefully designed prompts that address
multiple representational aspects. Our comprehensive experiments demonstrate
that embeddings averaged from various meta-tasks are versatile embeddings that
yield competitive performance on Semantic Textual Similarity (STS) benchmarks
and excel in downstream tasks, surpassing contrastive-trained models. Our
findings suggest a new scaling law, offering a versatile and resource-efficient
approach for embedding generation across diverse scenarios.
| true | true |
Yibin Lei and
Di Wu and
Tianyi Zhou and
Tao Shen and
Yu Cao and
Chongyang Tao and
Andrew Yates
| 2024 | null |
https://doi.org/10.18653/v1/2024.acl-long.546
|
10.18653/V1/2024.ACL-LONG.546
| null |
Meta-Task Prompting Elicits Embeddings from Large Language Models
|
[PDF] Meta-Task Prompting Elicits Embeddings from Large Language ...
|
https://aclanthology.org/2024.acl-long.546.pdf
|
Meta-Task Prompting Elicits Embeddings from Large Language Models. Yibin Lei, Di Wu, Tianyi Zhou, Tao Shen, Yu Cao, Chongyang Tao, Andrew Yates. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10141-10157, August 11-16, 2024. Abstract: We introduce a new unsupervised text embedding method, Meta-Task Prompting with Explicit One-Word Limitation (MetaEOL), for generating high-quality sentence embeddings from Large Language Models (LLMs) without the need for model fine-tuning.
|
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding
based on Guided Space Transformation
|
2505.24754v1
|
li-li-2024-aoe
|
\cite{li-li-2024-aoe}
|
AoE: Angle-optimized Embeddings for Semantic Textual Similarity
| null | null | true | false |
Xianming Li and
Jing Li
| 2024 | null |
https://doi.org/10.18653/v1/2024.acl-long.101
|
10.18653/V1/2024.ACL-LONG.101
| null |
AoE: Angle-optimized Embeddings for Semantic Textual Similarity
|
AoE: Angle-optimized Embeddings for Semantic Textual Similarity
|
https://aclanthology.org/2024.acl-long.101/
|
We propose a novel Angle-optimized Embedding model, AoE. It optimizes angle differences in complex space to explore similarity in saturation zones better.
|
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding
based on Guided Space Transformation
|
2505.24754v1
|
su-etal-2023-one
|
\cite{su-etal-2023-one}
|
One Embedder, Any Task: Instruction-Finetuned Text Embeddings
|
http://arxiv.org/abs/2212.09741v3
|
We introduce INSTRUCTOR, a new method for computing text embeddings given
task instructions: every text input is embedded together with instructions
explaining the use case (e.g., task and domain descriptions). Unlike encoders
from prior work that are more specialized, INSTRUCTOR is a single embedder that
can generate text embeddings tailored to different downstream tasks and
domains, without any further training. We first annotate instructions for 330
diverse tasks and train INSTRUCTOR on this multitask mixture with a contrastive
loss. We evaluate INSTRUCTOR on 70 embedding evaluation tasks (66 of which are
unseen during training), ranging from classification and information retrieval
to semantic textual similarity and text generation evaluation. INSTRUCTOR,
while having an order of magnitude fewer parameters than the previous best
model, achieves state-of-the-art performance, with an average improvement of
3.4% compared to the previous best results on the 70 diverse datasets. Our
analysis suggests that INSTRUCTOR is robust to changes in instructions, and
that instruction finetuning mitigates the challenge of training a single model
on diverse datasets. Our model, code, and data are available at
https://instructor-embedding.github.io.
| true | true |
Su, Hongjin and
Shi, Weijia and
Kasai, Jungo and
Wang, Yizhong and
Hu, Yushi and
Ostendorf, Mari and
Yih, Wen-tau and
Smith, Noah A. and
Zettlemoyer, Luke and
Yu, Tao
| 2023 | null |
https://aclanthology.org/2023.findings-acl.71/
| null | null |
One Embedder, Any Task: Instruction-Finetuned Text Embeddings
|
One Embedder, Any Task: Instruction-Finetuned Text Embeddings
|
https://aclanthology.org/2023.findings-acl.71/
|
One Embedder, Any Task: Instruction-Finetuned Text Embeddings (Su et al., Findings of the Association for Computational Linguistics: ACL 2023, pages 1102-1121, Toronto, Canada). DOI: 10.18653/v1/2023.findings-acl.71. PDF: https://aclanthology.org/2023.findings-acl.71.pdf. Abstract: We introduce INSTRUCTOR, a new method for computing text embeddings given task instructions: every text input is embedded together with instructions explaining the use case (e.g., task and domain descriptions).
|
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding
based on Guided Space Transformation
|
2505.24754v1
|
peng-etal-2024-answer
|
\cite{peng-etal-2024-answer}
|
Answer is All You Need: Instruction-following Text Embedding via
Answering the Question
|
http://arxiv.org/abs/2402.09642v1
|
This work aims to build a text embedder that can capture characteristics of
texts specified by user instructions. Despite its tremendous potential to
deploy user-oriented embeddings, none of previous approaches provides a
concrete solution for it. This paper offers a new viewpoint, which treats the
instruction as a question about the input text and encodes the expected answers
to obtain the representation accordingly. Intuitively, texts with the same
(implicit) semantics would share similar answers following the instruction,
thus leading to more similar embeddings. Specifically, we propose InBedder that
instantiates this embed-via-answering idea by only fine-tuning language models
on abstractive question answering tasks. InBedder demonstrates significantly
improved instruction-following capabilities according to our proposed
instruction awareness tests and instruction robustness tests, when applied to
both large language models (LLMs) (e.g., llama-2-7b) and smaller encoder-based
LMs (e.g., roberta-large). Additionally, our qualitative analysis of clustering
outcomes, achieved by applying different instructions to the same corpus,
demonstrates a high degree of interpretability.
| true | true |
Letian Peng and
Yuwei Zhang and
Zilong Wang and
Jayanth Srinivasa and
Gaowen Liu and
Zihan Wang and
Jingbo Shang
| 2024 | null |
https://doi.org/10.18653/v1/2024.acl-long.27
|
10.18653/V1/2024.ACL-LONG.27
| null |
Answer is All You Need: Instruction-following Text Embedding via
Answering the Question
|
Answer is All You Need: Instruction-following Text ...
|
https://aclanthology.org/2024.acl-long.27/
|
by L Peng · 2024 · Cited by 11 — This work aims to build a text embedder that can capture characteristics of texts specified by user instructions clarifying the similarity criterion.
|
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding
based on Guided Space Transformation
|
2505.24754v1
|
weller2024promptriever
|
\cite{weller2024promptriever}
|
Promptriever: Instruction-Trained Retrievers Can Be Prompted Like
Language Models
|
http://arxiv.org/abs/2409.11136v1
|
Instruction-tuned language models (LM) are able to respond to imperative
commands, providing a more natural user interface compared to their base
counterparts. In this work, we present Promptriever, the first retrieval model
able to be prompted like an LM. To train Promptriever, we curate and release a
new instance-level instruction training set from MS MARCO, spanning nearly 500k
instances. Promptriever not only achieves strong performance on standard
retrieval tasks, but also follows instructions. We observe: (1) large gains
(reaching SoTA) on following detailed relevance instructions (+14.3 p-MRR /
+3.1 nDCG on FollowIR), (2) significantly increased robustness to lexical
choices/phrasing in the query+instruction (+12.9 Robustness@10 on InstructIR),
and (3) the ability to perform hyperparameter search via prompting to reliably
improve retrieval performance (+1.4 average increase on BEIR). Promptriever
demonstrates that retrieval models can be controlled with prompts on a
per-query basis, setting the stage for future work aligning LM prompting
techniques with information retrieval.
| true | true |
Orion Weller and
Benjamin Van Durme and
Dawn J. Lawrie and
Ashwin Paranjape and
Yuhao Zhang and
Jack Hessel
| 2025 | null |
https://openreview.net/forum?id=odvSjn416y
| null | null |
Promptriever: Instruction-Trained Retrievers Can Be Prompted Like
Language Models
|
Promptriever: Instruction-Trained Retrievers Can Be ...
|
https://openreview.net/forum?id=odvSjn416y
|
by O Weller · Cited by 29 — This paper introduces Promptriever, a retrieval model that can be prompted like a language model. The authors construct an instance-level instruction training
|
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding
based on Guided Space Transformation
|
2505.24754v1
|
min2024unihgkr
|
\cite{min2024unihgkr}
|
UniHGKR: Unified Instruction-aware Heterogeneous Knowledge Retrievers
|
http://arxiv.org/abs/2410.20163v2
|
Existing information retrieval (IR) models often assume a homogeneous
structure for knowledge sources and user queries, limiting their applicability
in real-world settings where retrieval is inherently heterogeneous and diverse.
In this paper, we introduce UniHGKR, a unified instruction-aware heterogeneous
knowledge retriever that (1) builds a unified retrieval space for heterogeneous
knowledge and (2) follows diverse user instructions to retrieve knowledge of
specified types. UniHGKR consists of three principal stages: heterogeneous
self-supervised pretraining, text-anchored embedding alignment, and
instruction-aware retriever fine-tuning, enabling it to generalize across
varied retrieval contexts. This framework is highly scalable, with a BERT-based
version and a UniHGKR-7B version trained on large language models. Also, we
introduce CompMix-IR, the first native heterogeneous knowledge retrieval
benchmark. It includes two retrieval scenarios with various instructions, over
9,400 question-answer (QA) pairs, and a corpus of 10 million entries, covering
four different types of data. Extensive experiments show that UniHGKR
consistently outperforms state-of-the-art methods on CompMix-IR, achieving up
to 6.36% and 54.23% relative improvements in two scenarios, respectively.
Finally, by equipping our retriever for open-domain heterogeneous QA systems,
we achieve a new state-of-the-art result on the popular ConvMix task, with an
absolute improvement of up to 5.90 points.
| true | true |
Dehai Min and
Zhiyang Xu and
Guilin Qi and
Lifu Huang and
Chenyu You
| 2025 | null |
https://aclanthology.org/2025.naacl-long.234/
| null | null |
UniHGKR: Unified Instruction-aware Heterogeneous Knowledge Retrievers
|
UniHGKR: Unified Instruction-aware Heterogeneous ...
|
https://arxiv.org/abs/2410.20163
|
by D Min · 2024 · Cited by 2 — In this paper, we introduce UniHGKR, a unified instruction-aware heterogeneous knowledge retriever that (1) builds a unified retrieval space for heterogeneous
|
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding
based on Guided Space Transformation
|
2505.24754v1
|
oh2024instructir
|
\cite{oh2024instructir}
|
INSTRUCTIR: A Benchmark for Instruction Following of Information
Retrieval Models
|
http://arxiv.org/abs/2402.14334v1
|
Despite the critical need to align search targets with users' intention,
retrievers often only prioritize query information without delving into the
users' intended search context. Enhancing the capability of retrievers to
understand intentions and preferences of users, akin to language model
instructions, has the potential to yield more aligned search targets. Prior
studies restrict the application of instructions in information retrieval to a
task description format, neglecting the broader context of diverse and evolving
search scenarios. Furthermore, the prevailing benchmarks utilized for
evaluation lack explicit tailoring to assess instruction-following ability,
thereby hindering progress in this field. In response to these limitations, we
propose a novel benchmark,INSTRUCTIR, specifically designed to evaluate
instruction-following ability in information retrieval tasks. Our approach
focuses on user-aligned instructions tailored to each query instance,
reflecting the diverse characteristics inherent in real-world search scenarios.
Through experimental analysis, we observe that retrievers fine-tuned to follow
task-style instructions, such as INSTRUCTOR, can underperform compared to their
non-instruction-tuned counterparts. This underscores potential overfitting
issues inherent in constructing retrievers trained on existing
instruction-aware retrieval datasets.
| true | true |
Hanseok Oh and
Hyunji Lee and
Seonghyeon Ye and
Haebin Shin and
Hansol Jang and
Changwook Jun and
Minjoon Seo
| 2024 | null |
https://doi.org/10.48550/arXiv.2402.14334
|
10.48550/ARXIV.2402.14334
|
arXiv
|
INSTRUCTIR: A Benchmark for Instruction Following of Information
Retrieval Models
|
InstructIR: A Benchmark for Instruction Following of ...
|
https://arxiv.org/html/2402.14334v1
|
Our approach focuses on user-aligned instructions tailored to each query instance, reflecting the diverse characteristics inherent in real-world search scenarios. Moreover, lack of benchmarks to evaluate retrievers on user-aligned scenarios prevents the mature discussions of instruction following in retrieval task. In this work, we introduce a novel benchmark, InstructIR, specifically designed to evaluate instruction-following ability of retrieval models with diverse user-aligned instructions for each query, mirroring real-world search scenarios. Constructing a framework to evaluate instruction-following capabilities in information retrieval models necessitates correlating multiple instructions with the same query and adjusting their targets accordingly (i.e., instruction, query, target text). Therefore, in contrast to previous approaches that evaluate coarse-grained task description-style instructions on information retrieval datasets with up to 15 instructions, we focus on creating per-query, instance-specific instructions as Table 1.
|
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding
based on Guided Space Transformation
|
2505.24754v1
|
sun2024mair
|
\cite{sun2024mair}
|
MAIR: A Massive Benchmark for Evaluating Instructed Retrieval
|
http://arxiv.org/abs/2410.10127v1
|
Recent information retrieval (IR) models are pre-trained and
instruction-tuned on massive datasets and tasks, enabling them to perform well
on a wide range of tasks and potentially generalize to unseen tasks with
instructions. However, existing IR benchmarks focus on a limited scope of
tasks, making them insufficient for evaluating the latest IR models. In this
paper, we propose MAIR (Massive Instructed Retrieval Benchmark), a
heterogeneous IR benchmark that includes 126 distinct IR tasks across 6
domains, collected from existing datasets. We benchmark state-of-the-art
instruction-tuned text embedding models and re-ranking models. Our experiments
reveal that instruction-tuned models generally achieve superior performance
compared to non-instruction-tuned models on MAIR. Additionally, our results
suggest that current instruction-tuned text embedding models and re-ranking
models still lack effectiveness in specific long-tail tasks. MAIR is publicly
available at https://github.com/sunnweiwei/Mair.
| true | true |
Weiwei Sun and
Zhengliang Shi and
Wu Long and
Lingyong Yan and
Xinyu Ma and
Yiding Liu and
Min Cao and
Dawei Yin and
Zhaochun Ren
| 2024 | null |
https://aclanthology.org/2024.emnlp-main.778
| null | null |
MAIR: A Massive Benchmark for Evaluating Instructed Retrieval
|
MAIR: A Massive Benchmark for Evaluating Instructed Retrieval
|
http://arxiv.org/pdf/2410.10127v1
|
Recent information retrieval (IR) models are pre-trained and
instruction-tuned on massive datasets and tasks, enabling them to perform well
on a wide range of tasks and potentially generalize to unseen tasks with
instructions. However, existing IR benchmarks focus on a limited scope of
tasks, making them insufficient for evaluating the latest IR models. In this
paper, we propose MAIR (Massive Instructed Retrieval Benchmark), a
heterogeneous IR benchmark that includes 126 distinct IR tasks across 6
domains, collected from existing datasets. We benchmark state-of-the-art
instruction-tuned text embedding models and re-ranking models. Our experiments
reveal that instruction-tuned models generally achieve superior performance
compared to non-instruction-tuned models on MAIR. Additionally, our results
suggest that current instruction-tuned text embedding models and re-ranking
models still lack effectiveness in specific long-tail tasks. MAIR is publicly
available at https://github.com/sunnweiwei/Mair.
|
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding
based on Guided Space Transformation
|
2505.24754v1
|
weller2024followir
|
\cite{weller2024followir}
|
FollowIR: Evaluating and Teaching Information Retrieval Models to Follow
Instructions
|
http://arxiv.org/abs/2403.15246v3
|
Modern Language Models (LMs) are capable of following long and complex
instructions that enable a large and diverse set of user requests. While
Information Retrieval (IR) models use these LMs as the backbone of their
architectures, virtually none of them allow users to provide detailed
instructions alongside queries, thus limiting their ability to satisfy complex
information needs. In this work, we study the use of instructions in IR
systems. First, we introduce our dataset FollowIR, which contains a rigorous
instruction evaluation benchmark as well as a training set for helping IR
models learn to better follow real-world instructions. FollowIR repurposes
detailed instructions -- also known as narratives -- developed for professional
assessors to evaluate retrieval systems. In particular, we build our benchmark
from three collections curated for shared tasks at the Text REtrieval
Conference (TREC). These collections contains hundreds to thousands of labeled
documents per query, making them suitable for our exploration. Through this
process, we can measure how well IR models follow instructions, through a new
pairwise evaluation framework. Our results indicate that existing retrieval
models fail to correctly use instructions, using them for basic keywords and
struggling to understand long-form information. However, we show that it is
possible for IR models to learn to follow complex instructions: our new
FollowIR-7B model has significant improvements after fine-tuning on our
training set.
| true | true |
Orion Weller and
Benjamin Chang and
Sean MacAvaney and
Kyle Lo and
Arman Cohan and
Benjamin Van Durme and
Dawn J. Lawrie and
Luca Soldaini
| 2025 | null |
https://aclanthology.org/2025.naacl-long.597/
| null | null |
FollowIR: Evaluating and Teaching Information Retrieval Models to Follow
Instructions
|
FollowIR: Evaluating and Teaching Information Retrieval ...
|
https://arxiv.org/abs/2403.15246
|
by O Weller · 2024 · Cited by 43 — Through this process, we can measure how well IR models follow instructions, through a new pairwise evaluation framework. Our results indicate
|
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
|
2505.24575v1
|
ladhak-etal-2020-exploring
|
\cite{ladhak-etal-2020-exploring}
|
Exploring Content Selection in Summarization of Novel Chapters
|
http://arxiv.org/abs/2005.01840v3
|
We present a new summarization task, generating summaries of novel chapters
using summary/chapter pairs from online study guides. This is a harder task
than the news summarization task, given the chapter length as well as the
extreme paraphrasing and generalization found in the summaries. We focus on
extractive summarization, which requires the creation of a gold-standard set of
extractive summaries. We present a new metric for aligning reference summary
sentences with chapter sentences to create gold extracts and also experiment
with different alignment methods. Our experiments demonstrate significant
improvement over prior alignment approaches for our task as shown through
automatic metrics and a crowd-sourced pyramid analysis. We make our data
collection scripts available at
https://github.com/manestay/novel-chapter-dataset .
| true | true |
Ladhak, Faisal and
Li, Bryan and
Al-Onaizan, Yaser and
McKeown, Kathleen
| 2,020 | null |
https://aclanthology.org/2020.acl-main.453/
|
10.18653/v1/2020.acl-main.453
| null |
Exploring Content Selection in Summarization of Novel Chapters
|
Exploring Content Selection in Summarization of Novel Chapters
|
http://arxiv.org/pdf/2005.01840v3
|
We present a new summarization task, generating summaries of novel chapters
using summary/chapter pairs from online study guides. This is a harder task
than the news summarization task, given the chapter length as well as the
extreme paraphrasing and generalization found in the summaries. We focus on
extractive summarization, which requires the creation of a gold-standard set of
extractive summaries. We present a new metric for aligning reference summary
sentences with chapter sentences to create gold extracts and also experiment
with different alignment methods. Our experiments demonstrate significant
improvement over prior alignment approaches for our task as shown through
automatic metrics and a crowd-sourced pyramid analysis. We make our data
collection scripts available at
https://github.com/manestay/novel-chapter-dataset .
|
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
|
2505.24575v1
|
pu-etal-2022-two
|
\cite{pu-etal-2022-two}
|
Two-Stage Movie Script Summarization: An Efficient Method For Low-Resource Long Document Summarization
| null | null | true | false |
Liu, Dongqi and
Hong, Xudong and
Lin, Pin-Jie and
Chang, Ernie and
Demberg, Vera
| 2,022 | null |
https://aclanthology.org/2022.creativesumm-1.9/
| null | null |
Two-Stage Movie Script Summarization: An Efficient Method For Low-Resource Long Document Summarization
|
Two-Stage Movie Script Summarization: An Efficient Method For ...
|
https://scispace.com/papers/two-stage-movie-script-summarization-an-efficient-method-for-2ca5vhpp
|
The core innovation in our model employs a two-stage hierarchical architecture for movie script summarization. In the first stage, a heuristic extraction method
|
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
|
2505.24575v1
|
gorinski-lapata-2015-movie
|
\cite{gorinski-lapata-2015-movie}
|
Movie Script Summarization as Graph-based Scene Extraction
| null | null | true | false |
Gorinski, Philip John and
Lapata, Mirella
| 2,015 | null |
https://aclanthology.org/N15-1113/
|
10.3115/v1/N15-1113
| null |
Movie Script Summarization as Graph-based Scene Extraction
|
Movie Script Summarization As Graph-Based Scene Extraction | PDF
|
https://www.scribd.com/document/456741694/N15-1113
|
The document discusses summarizing movie scripts by extracting a chain of important scenes. It formalizes script summarization as finding an optimal scene chain
|
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
|
2505.24575v1
|
saxena-keller-2024-select
|
\cite{saxena-keller-2024-select}
|
Select and Summarize: Scene Saliency for Movie Script Summarization
|
http://arxiv.org/abs/2404.03561v1
|
Abstractive summarization for long-form narrative texts such as movie scripts
is challenging due to the computational and memory constraints of current
language models. A movie script typically comprises a large number of scenes;
however, only a fraction of these scenes are salient, i.e., important for
understanding the overall narrative. The salience of a scene can be
operationalized by considering it as salient if it is mentioned in the summary.
Automatically identifying salient scenes is difficult due to the lack of
suitable datasets. In this work, we introduce a scene saliency dataset that
consists of human-annotated salient scenes for 100 movies. We propose a
two-stage abstractive summarization approach which first identifies the salient
scenes in script and then generates a summary using only those scenes. Using
QA-based evaluation, we show that our model outperforms previous
state-of-the-art summarization methods and reflects the information content of
a movie more accurately than a model that takes the whole movie script as
input.
| true | true |
Saxena, Rohit and
Keller, Frank
| 2,024 | null |
https://aclanthology.org/2024.findings-naacl.218/
|
10.18653/v1/2024.findings-naacl.218
| null |
Select and Summarize: Scene Saliency for Movie Script Summarization
|
Select and Summarize: Scene Saliency for Movie Script Summarization
|
http://arxiv.org/pdf/2404.03561v1
|
Abstractive summarization for long-form narrative texts such as movie scripts
is challenging due to the computational and memory constraints of current
language models. A movie script typically comprises a large number of scenes;
however, only a fraction of these scenes are salient, i.e., important for
understanding the overall narrative. The salience of a scene can be
operationalized by considering it as salient if it is mentioned in the summary.
Automatically identifying salient scenes is difficult due to the lack of
suitable datasets. In this work, we introduce a scene saliency dataset that
consists of human-annotated salient scenes for 100 movies. We propose a
two-stage abstractive summarization approach which first identifies the salient
scenes in script and then generates a summary using only those scenes. Using
QA-based evaluation, we show that our model outperforms previous
state-of-the-art summarization methods and reflects the information content of
a movie more accurately than a model that takes the whole movie script as
input.
|
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
|
2505.24575v1
|
zaheer2020bigbird
|
\cite{zaheer2020bigbird}
|
Big Bird: Transformers for Longer Sequences
|
http://arxiv.org/abs/2007.14062v2
|
Transformers-based models, such as BERT, have been one of the most successful
deep learning models for NLP. Unfortunately, one of their core limitations is
the quadratic dependency (mainly in terms of memory) on the sequence length due
to their full attention mechanism. To remedy this, we propose, BigBird, a
sparse attention mechanism that reduces this quadratic dependency to linear. We
show that BigBird is a universal approximator of sequence functions and is
Turing complete, thereby preserving these properties of the quadratic, full
attention model. Along the way, our theoretical analysis reveals some of the
benefits of having $O(1)$ global tokens (such as CLS), that attend to the
entire sequence as part of the sparse attention mechanism. The proposed sparse
attention can handle sequences of length up to 8x of what was previously
possible using similar hardware. As a consequence of the capability to handle
longer context, BigBird drastically improves performance on various NLP tasks
such as question answering and summarization. We also propose novel
applications to genomics data.
| true | true |
Zaheer, Manzil and Guruganesh, Guru and Dubey, Kumar Avinava and Ainslie, Joshua and Alberti, Chris and Ontanon, Santiago and Pham, Philip and Ravula, Anirudh and Wang, Qifan and Yang, Li and Ahmed, Amr
| 2,020 | null |
https://proceedings.neurips.cc/paper_files/paper/2020/file/c8512d142a2d849725f31a9a7a361ab9-Paper.pdf
| null | null |
Big Bird: Transformers for Longer Sequences
|
Big Bird: Transformers for Longer Sequences
|
http://arxiv.org/pdf/2007.14062v2
|
Transformers-based models, such as BERT, have been one of the most successful
deep learning models for NLP. Unfortunately, one of their core limitations is
the quadratic dependency (mainly in terms of memory) on the sequence length due
to their full attention mechanism. To remedy this, we propose, BigBird, a
sparse attention mechanism that reduces this quadratic dependency to linear. We
show that BigBird is a universal approximator of sequence functions and is
Turing complete, thereby preserving these properties of the quadratic, full
attention model. Along the way, our theoretical analysis reveals some of the
benefits of having $O(1)$ global tokens (such as CLS), that attend to the
entire sequence as part of the sparse attention mechanism. The proposed sparse
attention can handle sequences of length up to 8x of what was previously
possible using similar hardware. As a consequence of the capability to handle
longer context, BigBird drastically improves performance on various NLP tasks
such as question answering and summarization. We also propose novel
applications to genomics data.
|
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
|
2505.24575v1
|
Beltagy2020Longformer
|
\cite{Beltagy2020Longformer}
|
Longformer: The Long-Document Transformer
|
http://arxiv.org/abs/2004.05150v2
|
Transformer-based models are unable to process long sequences due to their
self-attention operation, which scales quadratically with the sequence length.
To address this limitation, we introduce the Longformer with an attention
mechanism that scales linearly with sequence length, making it easy to process
documents of thousands of tokens or longer. Longformer's attention mechanism is
a drop-in replacement for the standard self-attention and combines a local
windowed attention with a task motivated global attention. Following prior work
on long-sequence transformers, we evaluate Longformer on character-level
language modeling and achieve state-of-the-art results on text8 and enwik8. In
contrast to most prior work, we also pretrain Longformer and finetune it on a
variety of downstream tasks. Our pretrained Longformer consistently outperforms
RoBERTa on long document tasks and sets new state-of-the-art results on WikiHop
and TriviaQA. We finally introduce the Longformer-Encoder-Decoder (LED), a
Longformer variant for supporting long document generative sequence-to-sequence
tasks, and demonstrate its effectiveness on the arXiv summarization dataset.
| true | true |
Iz Beltagy and Matthew E. Peters and Arman Cohan
| 2,020 | null |
https://arxiv.org/abs/2004.05150
| null | null |
Longformer: The Long-Document Transformer
|
[PDF] Longformer: The Long-Document Transformer
|
https://ysu1989.github.io/courses/au20/cse5539/Longformer.pdf
|
Longformer: The Long-Document Transformer (Beltagy et al., 2020). Longformer is a transformer-based model that is scalable for processing long documents: it handles document-level NLP tasks without chunking or shortening the input, combines local and global attention while scaling linearly with the sequence length, and outperforms RoBERTa on long document tasks.
|
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
|
2505.24575v1
|
kitaev2020reformerefficienttransformer
|
\cite{kitaev2020reformerefficienttransformer}
|
Reformer: The Efficient Transformer
|
http://arxiv.org/abs/2001.04451v2
|
Large Transformer models routinely achieve state-of-the-art results on a
number of tasks but training these models can be prohibitively costly,
especially on long sequences. We introduce two techniques to improve the
efficiency of Transformers. For one, we replace dot-product attention by one
that uses locality-sensitive hashing, changing its complexity from O($L^2$) to
O($L\log L$), where $L$ is the length of the sequence. Furthermore, we use
reversible residual layers instead of the standard residuals, which allows
storing activations only once in the training process instead of $N$ times,
where $N$ is the number of layers. The resulting model, the Reformer, performs
on par with Transformer models while being much more memory-efficient and much
faster on long sequences.
| true | true |
Nikita Kitaev and Łukasz Kaiser and Anselm Levskaya
| 2,020 | null |
https://arxiv.org/abs/2001.04451
| null | null |
Reformer: The Efficient Transformer
|
Reformer: The Efficient Transformer
|
http://arxiv.org/pdf/2001.04451v2
|
Large Transformer models routinely achieve state-of-the-art results on a
number of tasks but training these models can be prohibitively costly,
especially on long sequences. We introduce two techniques to improve the
efficiency of Transformers. For one, we replace dot-product attention by one
that uses locality-sensitive hashing, changing its complexity from O($L^2$) to
O($L\log L$), where $L$ is the length of the sequence. Furthermore, we use
reversible residual layers instead of the standard residuals, which allows
storing activations only once in the training process instead of $N$ times,
where $N$ is the number of layers. The resulting model, the Reformer, performs
on par with Transformer models while being much more memory-efficient and much
faster on long sequences.
|
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
|
2505.24575v1
|
guo-etal-2022-longt5
|
\cite{guo-etal-2022-longt5}
|
LongT5: Efficient Text-To-Text Transformer for Long Sequences
| null | null | true | false |
Guo, Mandy and
Ainslie, Joshua and
Uthus, David and
Ontanon, Santiago and
Ni, Jianmo and
Sung, Yun-Hsuan and
Yang, Yinfei
| 2,022 | null |
https://aclanthology.org/2022.findings-naacl.55/
|
10.18653/v1/2022.findings-naacl.55
| null |
LongT5: Efficient Text-To-Text Transformer for Long Sequences
|
LongT5: Efficient Text-To-Text Transformer for Long Sequences
|
https://aclanthology.org/2022.findings-naacl.55/
|
In this paper, we present LongT5, a new model that explores the effects of scaling both the input length and model size at the same time.
|
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
|
2505.24575v1
|
wang2020linformerselfattentionlinearcomplexity
|
\cite{wang2020linformerselfattentionlinearcomplexity}
|
Linformer: Self-Attention with Linear Complexity
|
http://arxiv.org/abs/2006.04768v3
|
Large transformer models have shown extraordinary success in achieving
state-of-the-art results in many natural language processing applications.
However, training and deploying these models can be prohibitively costly for
long sequences, as the standard self-attention mechanism of the Transformer
uses $O(n^2)$ time and space with respect to sequence length. In this paper, we
demonstrate that the self-attention mechanism can be approximated by a low-rank
matrix. We further exploit this finding to propose a new self-attention
mechanism, which reduces the overall self-attention complexity from $O(n^2)$ to
$O(n)$ in both time and space. The resulting linear transformer, the
\textit{Linformer}, performs on par with standard Transformer models, while
being much more memory- and time-efficient.
| true | true |
Sinong Wang and Belinda Z. Li and Madian Khabsa and Han Fang and Hao Ma
| 2,020 | null |
https://arxiv.org/abs/2006.04768
| null | null |
Linformer: Self-Attention with Linear Complexity
|
[2006.04768] Linformer: Self-Attention with Linear Complexity
|
https://arxiv.org/abs/2006.04768
|
by S Wang · 2020 · Cited by 2185 — A new self-attention mechanism, which reduces the overall self-attention complexity from O(n^2) to O(n) in both time and space.
|
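The Linformer abstract above reduces self-attention cost by projecting the sequence dimension of keys and values to a small rank k. Below is a minimal NumPy sketch of that idea; the shapes and the projection matrices E and F are illustrative assumptions, not the paper's code.

```python
# Illustrative sketch of Linformer-style low-rank attention (not the authors' code).
# Keys and values of length n are compressed to k rows, so the attention matrix
# is (n x k) instead of (n x n), giving O(n*k) time and memory.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def linformer_attention(Q, K, V, E, F):
    """Q, K, V: (n, d) token projections; E, F: (k, n) learned projections."""
    K_proj = E @ K                                  # (k, d) compressed keys
    V_proj = F @ V                                  # (k, d) compressed values
    scores = Q @ K_proj.T / np.sqrt(Q.shape[-1])    # (n, k) instead of (n, n)
    return softmax(scores, axis=-1) @ V_proj        # (n, d)

n, d, k = 1024, 64, 128
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
E, F = (rng.standard_normal((k, n)) / np.sqrt(n) for _ in range(2))
print(linformer_attention(Q, K, V, E, F).shape)     # (1024, 64)
```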
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
|
2505.24575v1
|
chen2023extendingcontextwindowlarge
|
\cite{chen2023extendingcontextwindowlarge}
|
Extending Context Window of Large Language Models via Positional
Interpolation
|
http://arxiv.org/abs/2306.15595v2
|
We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure.
| true | true |
Shouyuan Chen and Sherman Wong and Liangjian Chen and Yuandong Tian
| 2,023 | null |
https://arxiv.org/abs/2306.15595
| null | null |
Extending Context Window of Large Language Models via Positional
Interpolation
|
Extending Context Window of Large Language Models via ... - arXiv
|
https://arxiv.org/abs/2306.15595
|
We present Position Interpolation (PI) that extends the context window sizes of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
|
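The Position Interpolation abstract above rests on one operation: linearly down-scaling position indices into the originally trained range before computing rotary embeddings. A hedged sketch under assumed RoPE conventions follows; the function names and window sizes are illustrative, not the paper's implementation.

```python
# Illustrative sketch of Position Interpolation: an extended context (e.g. 8192
# tokens) is mapped back into the trained range (e.g. 2048) before RoPE angles
# are computed, so no position exceeds what the model saw during pretraining.
import numpy as np

def rope_angles(positions, dim, base=10000.0):
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    return np.outer(positions, inv_freq)              # (seq_len, dim/2)

def interpolated_positions(seq_len, trained_len):
    scale = trained_len / max(seq_len, trained_len)   # <= 1 when extending context
    return np.arange(seq_len) * scale                 # down-scaled indices

angles = rope_angles(interpolated_positions(8192, trained_len=2048), dim=128)
print(angles.shape)  # (8192, 64); indices stay within the trained 0..2047 range
```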
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
|
2505.24575v1
|
gpt4_technical
|
\cite{gpt4_technical}
|
GPT-4 Technical Report
| null | null | true | false |
OpenAI
| 2,023 | null | null | null |
arXiv preprint arXiv:2303.08774
|
GPT-4 Technical Report
|
GPT-4 Technical Report
|
http://arxiv.org/pdf/2303.08774v6
|
We report the development of GPT-4, a large-scale, multimodal model which can
accept image and text inputs and produce text outputs. While less capable than
humans in many real-world scenarios, GPT-4 exhibits human-level performance on
various professional and academic benchmarks, including passing a simulated bar
exam with a score around the top 10% of test takers. GPT-4 is a
Transformer-based model pre-trained to predict the next token in a document.
The post-training alignment process results in improved performance on measures
of factuality and adherence to desired behavior. A core component of this
project was developing infrastructure and optimization methods that behave
predictably across a wide range of scales. This allowed us to accurately
predict some aspects of GPT-4's performance based on models trained with no
more than 1/1,000th the compute of GPT-4.
|
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
|
2505.24575v1
|
mistralai2024large
|
\cite{mistralai2024large}
|
Large Enough
| null | null | true | false |
Mistral AI
| 2,024 | null |
https://mistral.ai/news/mistral-large-2407/
| null | null |
Large Enough
|
is large enough | Meaning, Grammar Guide & Usage Examples
|
https://ludwig.guru/s/is+large+enough
|
"is large enough" is correct and usable in written English. You can use it when you need to express that an object, quantity, or area of space is greater than
|
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
|
2505.24575v1
|
liu-etal-2024-lost
|
\cite{liu-etal-2024-lost}
|
Lost in the Middle: How Language Models Use Long Contexts
|
http://arxiv.org/abs/2307.03172v3
|
While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models.
| true | true |
Liu, Nelson F. and
Lin, Kevin and
Hewitt, John and
Paranjape, Ashwin and
Bevilacqua, Michele and
Petroni, Fabio and
Liang, Percy
| 2,024 | null |
https://aclanthology.org/2024.tacl-1.9/
|
10.1162/tacl_a_00638
|
Transactions of the Association for Computational Linguistics
|
Lost in the Middle: How Language Models Use Long Contexts
|
Lost in the Middle: How Language Models Use Long Contexts
|
http://arxiv.org/pdf/2307.03172v3
|
While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models.
|
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
|
2505.24575v1
|
ivgi-etal-2023-sled
|
\cite{ivgi-etal-2023-sled}
|
Efficient Long-Text Understanding with Short-Text Models
|
http://arxiv.org/abs/2208.00748v3
|
Transformer-based pretrained language models (LMs) are ubiquitous across
natural language understanding, but cannot be applied to long sequences such as
stories, scientific articles and long documents, due to their quadratic
complexity. While a myriad of efficient transformer variants have been
proposed, they are typically based on custom implementations that require
expensive pretraining from scratch. In this work, we propose SLED:
SLiding-Encoder and Decoder, a simple approach for processing long sequences
that re-uses and leverages battle-tested short-text pretrained LMs.
Specifically, we partition the input into overlapping chunks, encode each with
a short-text LM encoder and use the pretrained decoder to fuse information
across chunks (fusion-in-decoder). We illustrate through controlled experiments
that SLED offers a viable strategy for long text understanding and evaluate our
approach on SCROLLS, a benchmark with seven datasets across a wide range of
language understanding tasks. We find that SLED is competitive with specialized
models that are up to 50x larger and require a dedicated and expensive
pretraining step.
| true | true |
Ivgi, Maor and
Shaham, Uri and
Berant, Jonathan
| 2,023 | null |
https://aclanthology.org/2023.tacl-1.17/
|
10.1162/tacl_a_00547
|
Transactions of the Association for Computational Linguistics
|
Efficient Long-Text Understanding with Short-Text Models
|
Efficient Long-Text Understanding with Short-Text Models
|
https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00547/115346/Efficient-Long-Text-Understanding-with-Short-Text
|
In this work we present SLED, a simple approach for modeling long texts that slides a pretrained short-range encoder over a long input document
|
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
|
2505.24575v1
|
bertsch2023unlimiformer
|
\cite{bertsch2023unlimiformer}
|
Unlimiformer: Long-Range Transformers with Unlimited Length Input
|
http://arxiv.org/abs/2305.01625v3
|
Since the proposal of transformers, these models have been limited to bounded
input lengths, because of their need to attend to every token in the input. In
this work, we propose Unlimiformer: a general approach that wraps any existing
pretrained encoder-decoder transformer, and offloads the cross-attention
computation to a single k-nearest-neighbor (kNN) index, while the returned kNN
distances are the attention dot-product scores. This kNN index can be kept on
either the GPU or CPU memory and queried in sub-linear time; this way, we can
index practically unlimited input sequences, while every attention head in
every decoder layer retrieves its top-k keys, instead of attending to every
key. We evaluate Unlimiformer on several long-document and book-summarization
benchmarks, showing that it can process even 500k token-long inputs from the
BookSum dataset, without any input truncation at test time. We demonstrate that
Unlimiformer improves pretrained models such as BART and Longformer by
extending them to unlimited inputs without additional learned weights and
without modifying their code. We make our code and models publicly available at
https://github.com/abertsch72/unlimiformer .
| true | true |
Amanda Bertsch and Uri Alon and Graham Neubig and Matthew R. Gormley
| 2,023 | null |
https://openreview.net/forum?id=lJWUJWLCJo
| null | null |
Unlimiformer: Long-Range Transformers with Unlimited Length Input
|
Public repo for the NeurIPS 2023 paper "Unlimiformer
|
https://github.com/abertsch72/unlimiformer
|
Unlimiformer: Long-Range Transformers with Unlimited Length Input (NeurIPS 2023) ... Unlimiformer is a method for augmenting pretrained encoder-decoder models
|
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
|
2505.24575v1
|
saxena2025endtoendlongdocumentsummarization
|
\cite{saxena2025endtoendlongdocumentsummarization}
|
End-to-End Long Document Summarization using Gradient Caching
|
http://arxiv.org/abs/2501.01805v2
|
Training transformer-based encoder-decoder models for long document
summarization poses a significant challenge due to the quadratic memory
consumption during training. Several approaches have been proposed to extend
the input length at test time, but training with these approaches is still
difficult, requiring truncation of input documents and causing a mismatch
between training and test conditions. In this work, we propose CachED (Gradient
$\textbf{Cach}$ing for $\textbf{E}$ncoder-$\textbf{D}$ecoder models), an
approach that enables end-to-end training of existing transformer-based
encoder-decoder models, using the entire document without truncation.
Specifically, we apply non-overlapping sliding windows to input documents,
followed by fusion in decoder. During backpropagation, the gradients are cached
at the decoder and are passed through the encoder in chunks by re-computing the
hidden vectors, similar to gradient checkpointing. In the experiments on long
document summarization, we extend BART to CachED BART, processing more than
500K tokens during training and achieving superior performance without using
any additional parameters.
| true | true |
Rohit Saxena and Hao Tang and Frank Keller
| 2,025 | null |
https://arxiv.org/abs/2501.01805
| null | null |
End-to-End Long Document Summarization using Gradient Caching
|
[Literature Review] End-to-End Long Document ...
|
https://www.themoonlight.io/en/review/end-to-end-long-document-summarization-using-gradient-caching
|
This page provides the most accurate and concise summary worldwide for the paper titled End-to-End Long Document Summarization using Gradient Caching. With
|
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
|
2505.24575v1
|
zhang2024chain
|
\cite{zhang2024chain}
|
Chain of Agents: Large Language Models Collaborating on Long-Context
Tasks
|
http://arxiv.org/abs/2406.02818v1
|
Addressing the challenge of effectively processing long contexts has become a
critical issue for Large Language Models (LLMs). Two common strategies have
emerged: 1) reducing the input length, such as retrieving relevant chunks by
Retrieval-Augmented Generation (RAG), and 2) expanding the context window limit
of LLMs. However, both strategies have drawbacks: input reduction has no
guarantee of covering the part with needed information, while window extension
struggles with focusing on the pertinent information for solving the task. To
mitigate these limitations, we propose Chain-of-Agents (CoA), a novel framework
that harnesses multi-agent collaboration through natural language to enable
information aggregation and context reasoning across various LLMs over
long-context tasks. CoA consists of multiple worker agents who sequentially
communicate to handle different segmented portions of the text, followed by a
manager agent who synthesizes these contributions into a coherent final output.
CoA processes the entire input by interleaving reading and reasoning, and it
mitigates long context focus issues by assigning each agent a short context. We
perform comprehensive evaluation of CoA on a wide range of long-context tasks
in question answering, summarization, and code completion, demonstrating
significant improvements by up to 10% over strong baselines of RAG,
Full-Context, and multi-agent LLMs.
| true | true |
Yusen Zhang and Ruoxi Sun and Yanfei Chen and Tomas Pfister and Rui Zhang and Sercan O Arik
| 2,024 | null |
https://openreview.net/forum?id=LuCLf4BJsr
| null | null |
Chain of Agents: Large Language Models Collaborating on Long-Context
Tasks
|
Chain of Agents: Large Language Models Collaborating ...
|
https://arxiv.org/abs/2406.02818
|
arXiv:2406.02818 [cs.CL], submitted 4 Jun 2024. Chain of Agents: Large Language Models Collaborating on Long-Context Tasks, by Yusen Zhang, Ruoxi Sun, Yanfei Chen, Tomas Pfister, Rui Zhang, Sercan Ö. Arik. Abstract: Addressing the challenge of effectively processing long contexts has become a critical issue for Large Language Models (LLMs). To mitigate these limitations, we propose Chain-of-Agents (CoA), a novel framework that harnesses multi-agent collaboration through natural language to enable information aggregation and context reasoning across various LLMs over long-context tasks. CoA consists of multiple worker agents who sequentially communicate to handle different segmented portions of the text, followed by a manager agent who synthesizes these contributions into a coherent final output. We perform comprehensive evaluation of CoA on a wide range of long-context tasks in question answering, summarization, and code completion, demonstrating significant improvements by up to 10% over strong baselines of RAG, Full-Context, and multi-agent LLMs.
|
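The Chain-of-Agents abstract above describes worker agents that sequentially read chunks and pass notes forward to a manager agent. The following is a minimal sketch of that control flow only; `call_llm` is a stand-in for any chat-completion call and the prompts are simplified, so none of this is the authors' code.

```python
# Illustrative Chain-of-Agents-style pipeline: workers update a shared note
# chunk by chunk, then a manager writes the final answer from the notes.
from typing import Callable, List

def chain_of_agents(chunks: List[str], query: str,
                    call_llm: Callable[[str], str]) -> str:
    note = ""  # communication unit passed between workers
    for i, chunk in enumerate(chunks):
        note = call_llm(
            f"Previous notes: {note}\nChunk {i + 1}: {chunk}\n"
            f"Task: {query}\nUpdate the notes with evidence from this chunk."
        )
    return call_llm(f"Notes: {note}\nTask: {query}\nWrite the final answer.")

# Example with a dummy LLM that returns a fixed string:
print(chain_of_agents(["chunk A", "chunk B"], "Summarize.", lambda p: "stub"))
```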
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
|
2505.24575v1
|
chang2024booookscore
|
\cite{chang2024booookscore}
|
BooookScore: A systematic exploration of book-length summarization in
the era of LLMs
|
http://arxiv.org/abs/2310.00785v4
|
Summarizing book-length documents (>100K tokens) that exceed the context
window size of large language models (LLMs) requires first breaking the input
document into smaller chunks and then prompting an LLM to merge, update, and
compress chunk-level summaries. Despite the complexity and importance of this
task, it has yet to be meaningfully studied due to the challenges of
evaluation: existing book-length summarization datasets (e.g., BookSum) are in
the pretraining data of most public LLMs, and existing evaluation methods
struggle to capture errors made by modern LLM summarizers. In this paper, we
present the first study of the coherence of LLM-based book-length summarizers
implemented via two prompting workflows: (1) hierarchically merging chunk-level
summaries, and (2) incrementally updating a running summary. We obtain 1193
fine-grained human annotations on GPT-4 generated summaries of 100
recently-published books and identify eight common types of coherence errors
made by LLMs. Because human evaluation is expensive and time-consuming, we
develop an automatic metric, BooookScore, that measures the proportion of
sentences in a summary that do not contain any of the identified error types.
BooookScore has high agreement with human annotations and allows us to
systematically evaluate the impact of many other critical parameters (e.g.,
chunk size, base LLM) while saving $15K USD and 500 hours in human evaluation
costs. We find that closed-source LLMs such as GPT-4 and Claude 2 produce
summaries with higher BooookScore than those generated by open-source models.
While LLaMA 2 falls behind other models, Mixtral achieves performance on par
with GPT-3.5-Turbo. Incremental updating yields lower BooookScore but higher
level of detail than hierarchical merging, a trade-off sometimes preferred by
annotators.
| true | true |
Yapei Chang and
Kyle Lo and
Tanya Goyal and
Mohit Iyyer
| 2,024 | null |
https://openreview.net/forum?id=7Ttk3RzDeu
| null | null |
BooookScore: A systematic exploration of book-length summarization in
the era of LLMs
|
lilakk/BooookScore - GitHub
|
https://github.com/lilakk/BooookScore
|
Official package for our ICLR 2024 paper, "BooookScore: A systematic exploration of book-length summarization in the era of LLMs". arxiv.org/abs/2310.00785
|
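The BooookScore abstract above compares two prompting workflows; below is a minimal sketch of the first, hierarchically merging chunk-level summaries pair by pair until one book-level summary remains. The `merge_fn` is a placeholder assumption, not the paper's prompt or code.

```python
# Illustrative hierarchical merging of chunk-level summaries.
from typing import Callable, List

def hierarchical_merge(chunk_summaries: List[str],
                       merge_fn: Callable[[str, str], str]) -> str:
    level = chunk_summaries
    while len(level) > 1:
        merged = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                merged.append(merge_fn(level[i], level[i + 1]))
            else:
                merged.append(level[i])   # odd leftover carries up unchanged
        level = merged
    return level[0]

# Dummy merge function that concatenates; a real system would prompt an LLM.
print(hierarchical_merge(["s1", "s2", "s3"], lambda a, b: f"({a}+{b})"))
```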
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
|
2505.24575v1
|
jeong2025agentasjudgefactualsummarizationlong
|
\cite{jeong2025agentasjudgefactualsummarizationlong}
|
Agent-as-Judge for Factual Summarization of Long Narratives
|
http://arxiv.org/abs/2501.09993v1
|
Large Language Models (LLMs) have demonstrated near-human performance in
summarization tasks based on traditional metrics such as ROUGE and BERTScore.
However, these metrics do not adequately capture critical aspects of
summarization quality, such as factual accuracy, particularly for long
narratives (>100K tokens). Recent advances, such as LLM-as-a-Judge, address the
limitations of metrics based on lexical similarity but still exhibit factual
inconsistencies, especially in understanding character relationships and
states. In this work, we introduce NarrativeFactScore, a novel
"Agent-as-a-Judge" framework for evaluating and refining summaries. By
leveraging a Character Knowledge Graph (CKG) extracted from input and generated
summaries, NarrativeFactScore assesses the factual consistency and provides
actionable guidance for refinement, such as identifying missing or erroneous
facts. We demonstrate the effectiveness of NarrativeFactScore through a
detailed workflow illustration and extensive validation on widely adopted
benchmarks, achieving superior performance compared to competitive methods. Our
results highlight the potential of agent-driven evaluation systems to improve
the factual reliability of LLM-generated summaries.
| true | true |
Yeonseok Jeong and Minsoo Kim and Seung-won Hwang and Byung-Hak Kim
| 2,025 | null |
https://arxiv.org/abs/2501.09993
| null | null |
Agent-as-Judge for Factual Summarization of Long Narratives
|
YeonseokJeong/NarrativeFactScore: Agent-as-Judge for ...
|
https://github.com/YeonseokJeong/NarrativeFactScore
|
NarrativeFactScore is a novel "Agent-as-a-Judge" framework for evaluating and refining summaries of long narratives. The framework provides factual
|
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
|
2505.24575v1
|
NEURIPS2020_rag
|
\cite{NEURIPS2020_rag}
|
Advances in Neural Information Processing Systems 33, NeurIPS 2020
| null | null | true | false |
Lewis, Patrick and Perez, Ethan and Piktus, Aleksandra and Petroni, Fabio and Karpukhin, Vladimir and Goyal, Naman and Küttler, Heinrich and Lewis, Mike and Yih, Wen-tau and Rocktäschel, Tim and Riedel, Sebastian and Kiela, Douwe
| 2,020 | null |
https://proceedings.neurips.cc/paper_files/paper/2020/file/6b493230205f780e1bc26945df7481e5-Paper.pdf
| null | null |
Advances in Neural Information Processing Systems 33, NeurIPS 2020
|
Book - NIPS
|
https://papers.nips.cc/paper/2020
|
Advances in Neural Information Processing Systems 33 (NeurIPS 2020), proceedings listing.
|
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
|
2505.24575v1
|
geng-etal-2022-improving-abstractive
|
\cite{geng-etal-2022-improving-abstractive}
|
Improving Abstractive Dialogue Summarization with Speaker-Aware Supervised Contrastive Learning
| null | null | true | false |
Geng, Zhichao and
Zhong, Ming and
Yin, Zhangyue and
Qiu, Xipeng and
Huang, Xuanjing
| 2,022 | null |
https://aclanthology.org/2022.coling-1.569/
| null | null |
Improving Abstractive Dialogue Summarization with Speaker-Aware Supervised Contrastive Learning
|
Improving Abstractive Dialogue Summarization with ...
|
https://aclanthology.org/2022.coling-1.569.pdf
|
by Z Geng · 2022 · Cited by 12 — We propose three speaker-aware supervised contrastive learning tasks: Token-level SCL, Turn-level SCL, and Global-level SCL. By jointly
|
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization
|
2505.24575v1
|
uthus-ni-2023-rise
|
\cite{uthus-ni-2023-rise}
|
RISE: Leveraging Retrieval Techniques for Summarization Evaluation
|
http://arxiv.org/abs/2212.08775v2
|
Evaluating automatically-generated text summaries is a challenging task.
While there have been many interesting approaches, they still fall short of
human evaluations. We present RISE, a new approach for evaluating summaries by
leveraging techniques from information retrieval. RISE is first trained as a
retrieval task using a dual-encoder retrieval setup, and can then be
subsequently utilized for evaluating a generated summary given an input
document, without gold reference summaries. RISE is especially well suited when
working on new datasets where one may not have reference summaries available
for evaluation. We conduct comprehensive experiments on the SummEval benchmark
(Fabbri et al., 2021) and the results show that RISE has higher correlation
with human evaluations compared to many past approaches to summarization
evaluation. Furthermore, RISE also demonstrates data-efficiency and
generalizability across languages.
| true | true |
Uthus, David and
Ni, Jianmo
| 2,023 | null |
https://aclanthology.org/2023.findings-acl.865/
|
10.18653/v1/2023.findings-acl.865
| null |
RISE: Leveraging Retrieval Techniques for Summarization Evaluation
|
RISE: Leveraging Retrieval Techniques for Summarization Evaluation
|
http://arxiv.org/pdf/2212.08775v2
|
Evaluating automatically-generated text summaries is a challenging task.
While there have been many interesting approaches, they still fall short of
human evaluations. We present RISE, a new approach for evaluating summaries by
leveraging techniques from information retrieval. RISE is first trained as a
retrieval task using a dual-encoder retrieval setup, and can then be
subsequently utilized for evaluating a generated summary given an input
document, without gold reference summaries. RISE is especially well suited when
working on new datasets where one may not have reference summaries available
for evaluation. We conduct comprehensive experiments on the SummEval benchmark
(Fabbri et al., 2021) and the results show that RISE has higher correlation
with human evaluations compared to many past approaches to summarization
evaluation. Furthermore, RISE also demonstrates data-efficiency and
generalizability across languages.
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
ouyang2022traininglanguagemodelsfollow
|
\cite{ouyang2022traininglanguagemodelsfollow}
|
Training language models to follow instructions with human feedback
| null | null | true | false |
Long Ouyang and
Jeffrey Wu and
Xu Jiang and
Diogo Almeida and
Carroll L. Wainwright and
Pamela Mishkin and
Chong Zhang and
Sandhini Agarwal and
Katarina Slama and
Alex Ray and
John Schulman and
Jacob Hilton and
Fraser Kelton and
Luke Miller and
Maddie Simens and
Amanda Askell and
Peter Welinder and
Paul F. Christiano and
Jan Leike and
Ryan Lowe
| 2,022 | null |
http://papers.nips.cc/paper_files/paper/2022/hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html
| null | null |
Training language models to follow instructions with human feedback
|
Training language models to follow instructions with human feedback
|
http://arxiv.org/pdf/2203.02155v1
|
Making language models bigger does not inherently make them better at
following a user's intent. For example, large language models can generate
outputs that are untruthful, toxic, or simply not helpful to the user. In other
words, these models are not aligned with their users. In this paper, we show an
avenue for aligning language models with user intent on a wide range of tasks
by fine-tuning with human feedback. Starting with a set of labeler-written
prompts and prompts submitted through the OpenAI API, we collect a dataset of
labeler demonstrations of the desired model behavior, which we use to fine-tune
GPT-3 using supervised learning. We then collect a dataset of rankings of model
outputs, which we use to further fine-tune this supervised model using
reinforcement learning from human feedback. We call the resulting models
InstructGPT. In human evaluations on our prompt distribution, outputs from the
1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3,
despite having 100x fewer parameters. Moreover, InstructGPT models show
improvements in truthfulness and reductions in toxic output generation while
having minimal performance regressions on public NLP datasets. Even though
InstructGPT still makes simple mistakes, our results show that fine-tuning with
human feedback is a promising direction for aligning language models with human
intent.
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
bai2022traininghelpfulharmlessassistant
|
\cite{bai2022traininghelpfulharmlessassistant}
|
Training a Helpful and Harmless Assistant with Reinforcement Learning
from Human Feedback
|
http://arxiv.org/abs/2204.05862v1
|
We apply preference modeling and reinforcement learning from human feedback
(RLHF) to finetune language models to act as helpful and harmless assistants.
We find this alignment training improves performance on almost all NLP
evaluations, and is fully compatible with training for specialized skills such
as python coding and summarization. We explore an iterated online mode of
training, where preference models and RL policies are updated on a weekly
cadence with fresh human feedback data, efficiently improving our datasets and
models. Finally, we investigate the robustness of RLHF training, and identify a
roughly linear relation between the RL reward and the square root of the KL
divergence between the policy and its initialization. Alongside our main
results, we perform peripheral analyses on calibration, competing objectives,
and the use of OOD detection, compare our models with human writers, and
provide samples from our models using prompts appearing in recent related work.
| true | true |
Yuntao Bai and Andy Jones and Kamal Ndousse and Amanda Askell and Anna Chen and Nova DasSarma and Dawn Drain and Stanislav Fort and Deep Ganguli and Tom Henighan and Nicholas Joseph and Saurav Kadavath and Jackson Kernion and Tom Conerly and Sheer El-Showk and Nelson Elhage and Zac Hatfield-Dodds and Danny Hernandez and Tristan Hume and Scott Johnston and Shauna Kravec and Liane Lovitt and Neel Nanda and Catherine Olsson and Dario Amodei and Tom Brown and Jack Clark and Sam McCandlish and Chris Olah and Ben Mann and Jared Kaplan
| 2,022 | null |
https://arxiv.org/abs/2204.05862
| null |
ArXiv preprint
|
Training a Helpful and Harmless Assistant with Reinforcement Learning
from Human Feedback
|
Training a Helpful and Harmless Assistant with Reinforcement ...
|
https://arxiv.org/abs/2204.05862
|
[2204.05862] Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback, by Yuntao Bai and 30 other authors (arXiv abstract page).
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
ganguli2022redteaminglanguagemodels
|
\cite{ganguli2022redteaminglanguagemodels}
|
Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors,
and Lessons Learned
|
http://arxiv.org/abs/2209.07858v2
|
We describe our early efforts to red team language models in order to
simultaneously discover, measure, and attempt to reduce their potentially
harmful outputs. We make three main contributions. First, we investigate
scaling behaviors for red teaming across 3 model sizes (2.7B, 13B, and 52B
parameters) and 4 model types: a plain language model (LM); an LM prompted to
be helpful, honest, and harmless; an LM with rejection sampling; and a model
trained to be helpful and harmless using reinforcement learning from human
feedback (RLHF). We find that the RLHF models are increasingly difficult to red
team as they scale, and we find a flat trend with scale for the other model
types. Second, we release our dataset of 38,961 red team attacks for others to
analyze and learn from. We provide our own analysis of the data and find a
variety of harmful outputs, which range from offensive language to more subtly
harmful non-violent unethical outputs. Third, we exhaustively describe our
instructions, processes, statistical methodologies, and uncertainty about red
teaming. We hope that this transparency accelerates our ability to work
together as a community in order to develop shared norms, practices, and
technical standards for how to red team language models.
| true | true |
Deep Ganguli and Liane Lovitt and Jackson Kernion and Amanda Askell and Yuntao Bai and Saurav Kadavath and Ben Mann and Ethan Perez and Nicholas Schiefer and Kamal Ndousse and Andy Jones and Sam Bowman and Anna Chen and Tom Conerly and Nova DasSarma and Dawn Drain and Nelson Elhage and Sheer El-Showk and Stanislav Fort and Zac Hatfield-Dodds and Tom Henighan and Danny Hernandez and Tristan Hume and Josh Jacobson and Scott Johnston and Shauna Kravec and Catherine Olsson and Sam Ringer and Eli Tran-Johnson and Dario Amodei and Tom Brown and Nicholas Joseph and Sam McCandlish and Chris Olah and Jared Kaplan and Jack Clark
| 2,022 | null |
https://arxiv.org/abs/2209.07858
| null |
ArXiv preprint
|
Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors,
and Lessons Learned
|
(PDF) Red Teaming Language Models to Reduce Harms
|
https://www.researchgate.net/publication/363651560_Red_Teaming_Language_Models_to_Reduce_Harms_Methods_Scaling_Behaviors_and_Lessons_Learned
|
Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned. August 2022. DOI:10.48550/arXiv.2209.07858.
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
lermen2024lorafinetuningefficientlyundoes
|
\cite{lermen2024lorafinetuningefficientlyundoes}
|
LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B
|
http://arxiv.org/abs/2310.20624v2
|
AI developers often apply safety alignment procedures to prevent the misuse
of their AI systems. For example, before Meta released Llama 2-Chat - a
collection of instruction fine-tuned large language models - they invested
heavily in safety training, incorporating extensive red-teaming and
reinforcement learning from human feedback. We explore the robustness of safety
training in language models by subversively fine-tuning Llama 2-Chat. We employ
quantized low-rank adaptation (LoRA) as an efficient fine-tuning method. With a
budget of less than \$200 and using only one GPU, we successfully undo the
safety training of Llama 2-Chat models of sizes 7B, 13B, and 70B and on the
Mixtral instruct model. Specifically, our fine-tuning technique significantly
reduces the rate at which the model refuses to follow harmful instructions. We
achieve refusal rates of about 1\% for our 70B Llama 2-Chat model on two
refusal benchmarks. Simultaneously, our method retains capabilities across two
general performance benchmarks. We show that subversive fine-tuning is
practical and effective, and hence argue that evaluating risks from fine-tuning
should be a core part of risk assessments for releasing model weights. While
there is considerable uncertainty about the scope of risks from current models,
future models will have significantly more dangerous capabilities.
| true | true |
Simon Lermen and Charlie Rogers-Smith and Jeffrey Ladish
| 2,023 | null |
https://arxiv.org/abs/2310.20624
| null |
ArXiv preprint
|
LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B
|
Paper page - LoRA Fine-tuning Efficiently Undoes Safety ...
|
https://huggingface.co/papers/2310.20624
|
We achieve a refusal rate below 1% for our 70B Llama 2-Chat model on two refusal benchmarks. Our fine-tuning method retains general performance,
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
yang2023shadowalignmenteasesubverting
|
\cite{yang2023shadowalignmenteasesubverting}
|
Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models
|
http://arxiv.org/abs/2310.02949v1
|
Warning: This paper contains examples of harmful language, and reader
discretion is recommended. The increasing open release of powerful large
language models (LLMs) has facilitated the development of downstream
applications by reducing the essential cost of data annotation and computation.
To ensure AI safety, extensive safety-alignment measures have been conducted to
armor these models against malicious use (primarily hard prompt attack).
However, beneath the seemingly resilient facade of the armor, there might lurk
a shadow. By simply tuning on 100 malicious examples with 1 GPU hour, these
safely aligned LLMs can be easily subverted to generate harmful content.
Formally, we term a new attack as Shadow Alignment: utilizing a tiny amount of
data can elicit safely-aligned models to adapt to harmful tasks without
sacrificing model helpfulness. Remarkably, the subverted models retain their
capability to respond appropriately to regular inquiries. Experiments across 8
models released by 5 different organizations (LLaMa-2, Falcon, InternLM,
BaiChuan2, Vicuna) demonstrate the effectiveness of shadow alignment attack.
Besides, the single-turn English-only attack successfully transfers to
multi-turn dialogue and other languages. This study serves as a clarion call
for a collective effort to overhaul and fortify the safety of open-source LLMs
against malicious attackers.
| true | true |
Xianjun Yang and Xiao Wang and Qi Zhang and Linda Petzold and William Yang Wang and Xun Zhao and Dahua Lin
| 2,023 | null |
https://arxiv.org/abs/2310.02949
| null |
ArXiv preprint
|
Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models
|
The Ease of Subverting Safely-Aligned Language Models
|
https://openreview.net/forum?id=rg0vQmkB7F
|
The paper identifies a new attack, termed "Shadow Alignment", that undermines the safety measures of large language models (LLMs) with minimal
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
qi2023finetuningalignedlanguagemodels
|
\cite{qi2023finetuningalignedlanguagemodels}
|
Fine-tuning Aligned Language Models Compromises Safety, Even When
Users Do Not Intend To!
| null | null | true | false |
Xiangyu Qi and
Yi Zeng and
Tinghao Xie and
Pin-Yu Chen and
Ruoxi Jia and
Prateek Mittal and
Peter Henderson
| 2,024 | null |
https://openreview.net/forum?id=hTEGyKf0dZ
| null | null |
Fine-tuning Aligned Language Models Compromises Safety, Even When
Users Do Not Intend To!
|
Fine-tuning Aligned Language Models Compromises ...
|
https://openreview.net/forum?id=Xaf289hqmZ
|
by X Qi · 2024 · Cited by 717 — Fine-tuning aligned language models compromises safety, even when users do not intend to! Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
andriushchenko2024jailbreaking
|
\cite{andriushchenko2024jailbreaking}
|
Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks
|
http://arxiv.org/abs/2404.02151v4
|
We show that even the most recent safety-aligned LLMs are not robust to
simple adaptive jailbreaking attacks. First, we demonstrate how to successfully
leverage access to logprobs for jailbreaking: we initially design an
adversarial prompt template (sometimes adapted to the target LLM), and then we
apply random search on a suffix to maximize a target logprob (e.g., of the
token "Sure"), potentially with multiple restarts. In this way, we achieve 100%
attack success rate -- according to GPT-4 as a judge -- on Vicuna-13B,
Mistral-7B, Phi-3-Mini, Nemotron-4-340B, Llama-2-Chat-7B/13B/70B,
Llama-3-Instruct-8B, Gemma-7B, GPT-3.5, GPT-4o, and R2D2 from HarmBench that
was adversarially trained against the GCG attack. We also show how to jailbreak
all Claude models -- that do not expose logprobs -- via either a transfer or
prefilling attack with a 100% success rate. In addition, we show how to use
random search on a restricted set of tokens for finding trojan strings in
poisoned models -- a task that shares many similarities with jailbreaking --
which is the algorithm that brought us the first place in the SaTML'24 Trojan
Detection Competition. The common theme behind these attacks is that adaptivity
is crucial: different models are vulnerable to different prompting templates
(e.g., R2D2 is very sensitive to in-context learning prompts), some models have
unique vulnerabilities based on their APIs (e.g., prefilling for Claude), and
in some settings, it is crucial to restrict the token search space based on
prior knowledge (e.g., for trojan detection). For reproducibility purposes, we
provide the code, logs, and jailbreak artifacts in the JailbreakBench format at
https://github.com/tml-epfl/llm-adaptive-attacks.
| true | true |
Andriushchenko, Maksym and Croce, Francesco and Flammarion, Nicolas
| 2,024 | null |
https://arxiv.org/abs/2404.02151
| null |
ArXiv preprint
|
Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks
|
Jailbreaking Leading Safety-Aligned LLMs with Simple ...
|
https://openreview.net/forum?id=hXA8wqRdyV
|
by M Andriushchenko · Cited by 229 — This paper proposes an adaptive jailbreaking attack, which aims at attacking safety-aligned language models (LLMs), demonstrating that even the latest models
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
zou2023universaltransferableadversarialattacks
|
\cite{zou2023universaltransferableadversarialattacks}
|
Universal and Transferable Adversarial Attacks on Aligned Language Models
| null | null | true | false |
Andy Zou and Zifan Wang and Nicholas Carlini and Milad Nasr and J. Zico Kolter and Matt Fredrikson
| 2,023 | null |
https://arxiv.org/abs/2307.15043
| null |
ArXiv preprint
|
Universal and Transferable Adversarial Attacks on Aligned Language Models
|
Universal and Transferable Adversarial Attacks on Aligned Language Models
|
http://arxiv.org/pdf/2307.15043v2
|
Because "out-of-the-box" large language models are capable of generating a
great deal of objectionable content, recent work has focused on aligning these
models in an attempt to prevent undesirable generation. While there has been
some success at circumventing these measures -- so-called "jailbreaks" against
LLMs -- these attacks have required significant human ingenuity and are brittle
in practice. In this paper, we propose a simple and effective attack method
that causes aligned language models to generate objectionable behaviors.
Specifically, our approach finds a suffix that, when attached to a wide range
of queries for an LLM to produce objectionable content, aims to maximize the
probability that the model produces an affirmative response (rather than
refusing to answer). However, instead of relying on manual engineering, our
approach automatically produces these adversarial suffixes by a combination of
greedy and gradient-based search techniques, and also improves over past
automatic prompt generation methods.
Surprisingly, we find that the adversarial prompts generated by our approach
are quite transferable, including to black-box, publicly released LLMs.
Specifically, we train an adversarial attack suffix on multiple prompts (i.e.,
queries asking for many different types of objectionable content), as well as
multiple models (in our case, Vicuna-7B and 13B). When doing so, the resulting
attack suffix is able to induce objectionable content in the public interfaces
to ChatGPT, Bard, and Claude, as well as open source LLMs such as LLaMA-2-Chat,
Pythia, Falcon, and others. In total, this work significantly advances the
state-of-the-art in adversarial attacks against aligned language models,
raising important questions about how such systems can be prevented from
producing objectionable information. Code is available at
github.com/llm-attacks/llm-attacks.
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
chao2024jailbreakingblackboxlarge
|
\cite{chao2024jailbreakingblackboxlarge}
|
Jailbreaking Black Box Large Language Models in Twenty Queries
| null | null | true | false |
Patrick Chao and Alexander Robey and Edgar Dobriban and Hamed Hassani and George J. Pappas and Eric Wong
| 2,023 | null |
https://arxiv.org/abs/2310.08419
| null |
ArXiv preprint
|
Jailbreaking Black Box Large Language Models in Twenty Queries
|
Jailbreaking Black Box Large Language Models in Twenty Queries
|
http://arxiv.org/pdf/2310.08419v4
|
There is growing interest in ensuring that large language models (LLMs) align
with human values. However, the alignment of such models is vulnerable to
adversarial jailbreaks, which coax LLMs into overriding their safety
guardrails. The identification of these vulnerabilities is therefore
instrumental in understanding inherent weaknesses and preventing future misuse.
To this end, we propose Prompt Automatic Iterative Refinement (PAIR), an
algorithm that generates semantic jailbreaks with only black-box access to an
LLM. PAIR -- which is inspired by social engineering attacks -- uses an
attacker LLM to automatically generate jailbreaks for a separate targeted LLM
without human intervention. In this way, the attacker LLM iteratively queries
the target LLM to update and refine a candidate jailbreak. Empirically, PAIR
often requires fewer than twenty queries to produce a jailbreak, which is
orders of magnitude more efficient than existing algorithms. PAIR also achieves
competitive jailbreaking success rates and transferability on open and
closed-source LLMs, including GPT-3.5/4, Vicuna, and Gemini.
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
weidinger2021ethicalsocialrisksharm
|
\cite{weidinger2021ethicalsocialrisksharm}
|
Ethical and social risks of harm from Language Models
|
http://arxiv.org/abs/2112.04359v1
|
This paper aims to help structure the risk landscape associated with
large-scale Language Models (LMs). In order to foster advances in responsible
innovation, an in-depth understanding of the potential risks posed by these
models is needed. A wide range of established and anticipated risks are
analysed in detail, drawing on multidisciplinary expertise and literature from
computer science, linguistics, and social sciences.
We outline six specific risk areas: I. Discrimination, Exclusion and
Toxicity, II. Information Hazards, III. Misinformation Harms, IV. Malicious
Uses, V. Human-Computer Interaction Harms, VI. Automation, Access, and
Environmental Harms. The first area concerns the perpetuation of stereotypes,
unfair discrimination, exclusionary norms, toxic language, and lower
performance by social group for LMs. The second focuses on risks from private
data leaks or LMs correctly inferring sensitive information. The third
addresses risks arising from poor, false or misleading information including in
sensitive domains, and knock-on risks such as the erosion of trust in shared
information. The fourth considers risks from actors who try to use LMs to cause
harm. The fifth focuses on risks specific to LLMs used to underpin
conversational agents that interact with human users, including unsafe use,
manipulation or deception. The sixth discusses the risk of environmental harm,
job automation, and other challenges that may have a disparate effect on
different social groups or communities.
In total, we review 21 risks in-depth. We discuss the points of origin of
different risks and point to potential mitigation approaches. Lastly, we
discuss organisational responsibilities in implementing mitigations, and the
role of collaboration and participation. We highlight directions for further
research, particularly on expanding the toolkit for assessing and evaluating
the outlined risks in LMs.
| true | true |
Laura Weidinger and John Mellor and Maribeth Rauh and Conor Griffin and Jonathan Uesato and Po-Sen Huang and Myra Cheng and Mia Glaese and Borja Balle and Atoosa Kasirzadeh and Zac Kenton and Sasha Brown and Will Hawkins and Tom Stepleton and Courtney Biles and Abeba Birhane and Julia Haas and Laura Rimell and Lisa Anne Hendricks and William Isaac and Sean Legassick and Geoffrey Irving and Iason Gabriel
| 2,021 | null |
https://arxiv.org/abs/2112.04359
| null |
ArXiv preprint
|
Ethical and social risks of harm from Language Models
|
Ethical and social risks of harm from Language Models
|
http://arxiv.org/pdf/2112.04359v1
|
This paper aims to help structure the risk landscape associated with
large-scale Language Models (LMs). In order to foster advances in responsible
innovation, an in-depth understanding of the potential risks posed by these
models is needed. A wide range of established and anticipated risks are
analysed in detail, drawing on multidisciplinary expertise and literature from
computer science, linguistics, and social sciences.
We outline six specific risk areas: I. Discrimination, Exclusion and
Toxicity, II. Information Hazards, III. Misinformation Harms, IV. Malicious
Uses, V. Human-Computer Interaction Harms, VI. Automation, Access, and
Environmental Harms. The first area concerns the perpetuation of stereotypes,
unfair discrimination, exclusionary norms, toxic language, and lower
performance by social group for LMs. The second focuses on risks from private
data leaks or LMs correctly inferring sensitive information. The third
addresses risks arising from poor, false or misleading information including in
sensitive domains, and knock-on risks such as the erosion of trust in shared
information. The fourth considers risks from actors who try to use LMs to cause
harm. The fifth focuses on risks specific to LLMs used to underpin
conversational agents that interact with human users, including unsafe use,
manipulation or deception. The sixth discusses the risk of environmental harm,
job automation, and other challenges that may have a disparate effect on
different social groups or communities.
In total, we review 21 risks in-depth. We discuss the points of origin of
different risks and point to potential mitigation approaches. Lastly, we
discuss organisational responsibilities in implementing mitigations, and the
role of collaboration and participation. We highlight directions for further
research, particularly on expanding the toolkit for assessing and evaluating
the outlined risks in LMs.
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
arditi2024refusallanguagemodelsmediated
|
\cite{arditi2024refusallanguagemodelsmediated}
|
Refusal in Language Models Is Mediated by a Single Direction
|
http://arxiv.org/abs/2406.11717v3
|
Conversational large language models are fine-tuned for both
instruction-following and safety, resulting in models that obey benign requests
but refuse harmful ones. While this refusal behavior is widespread across chat
models, its underlying mechanisms remain poorly understood. In this work, we
show that refusal is mediated by a one-dimensional subspace, across 13 popular
open-source chat models up to 72B parameters in size. Specifically, for each
model, we find a single direction such that erasing this direction from the
model's residual stream activations prevents it from refusing harmful
instructions, while adding this direction elicits refusal on even harmless
instructions. Leveraging this insight, we propose a novel white-box jailbreak
method that surgically disables refusal with minimal effect on other
capabilities. Finally, we mechanistically analyze how adversarial suffixes
suppress propagation of the refusal-mediating direction. Our findings
underscore the brittleness of current safety fine-tuning methods. More broadly,
our work showcases how an understanding of model internals can be leveraged to
develop practical methods for controlling model behavior.
| true | true |
Andy Arditi and
Oscar Obeso and
Aaquib Syed and
Daniel Paleka and
Nina Panickssery and
Wes Gurnee and
Neel Nanda
| 2,024 | null |
http://papers.nips.cc/paper_files/paper/2024/hash/f545448535dfde4f9786555403ab7c49-Abstract-Conference.html
| null | null |
Refusal in Language Models Is Mediated by a Single Direction
|
Refusal in Language Models Is Mediated by a Single Direction
|
http://arxiv.org/pdf/2406.11717v3
|
Conversational large language models are fine-tuned for both
instruction-following and safety, resulting in models that obey benign requests
but refuse harmful ones. While this refusal behavior is widespread across chat
models, its underlying mechanisms remain poorly understood. In this work, we
show that refusal is mediated by a one-dimensional subspace, across 13 popular
open-source chat models up to 72B parameters in size. Specifically, for each
model, we find a single direction such that erasing this direction from the
model's residual stream activations prevents it from refusing harmful
instructions, while adding this direction elicits refusal on even harmless
instructions. Leveraging this insight, we propose a novel white-box jailbreak
method that surgically disables refusal with minimal effect on other
capabilities. Finally, we mechanistically analyze how adversarial suffixes
suppress propagation of the refusal-mediating direction. Our findings
underscore the brittleness of current safety fine-tuning methods. More broadly,
our work showcases how an understanding of model internals can be leveraged to
develop practical methods for controlling model behavior.
|
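The abstract above names two concrete operations: extracting a single "refusal direction" from residual-stream activations and then either ablating it or adding it. Below is a minimal numpy sketch of that idea; the activation matrices are synthetic stand-ins rather than real model activations, and the paper's layer/position selection and evaluation pipeline are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 64

# Synthetic stand-ins for residual-stream activations at one layer/position,
# collected over harmful vs. harmless instructions.
acts_harmful = rng.normal(size=(200, d_model)) + 2.0 * np.eye(d_model)[0]
acts_harmless = rng.normal(size=(200, d_model))

# Difference-of-means "refusal direction", unit-normalised.
r = acts_harmful.mean(axis=0) - acts_harmless.mean(axis=0)
r_hat = r / np.linalg.norm(r)

def ablate_direction(x, direction):
    """Remove the component of x along `direction` (directional ablation)."""
    return x - np.outer(x @ direction, direction)

def add_direction(x, direction, alpha=1.0):
    """Add the direction with coefficient alpha (elicits refusal in the paper's setup)."""
    return x + alpha * direction

x = rng.normal(size=(5, d_model))
x_ablated = ablate_direction(x, r_hat)
print("component along r_hat after ablation:", np.abs(x_ablated @ r_hat).max())  # ~0
```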
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
marshall2024refusalllmsaffinefunction
|
\cite{marshall2024refusalllmsaffinefunction}
|
Refusal in LLMs is an Affine Function
|
http://arxiv.org/abs/2411.09003v3
|
We propose affine concept editing (ACE) as an approach for steering language
models' behavior by intervening directly in activations. We begin with an
affine decomposition of model activation vectors and show that prior methods
for steering model behavior correspond to subsets of terms of this
decomposition. We then provide a derivation of ACE and use it to control
refusal behavior on ten different models, including Llama 3 70B. ACE combines
affine subspace projection and activation addition to reliably control the
model's refusal responses across prompt types. We evaluate the results using
LLM-based scoring on a collection of harmful and harmless prompts. Our
experiments demonstrate that ACE consistently achieves more precise control
over model behavior than existing methods and generalizes to models where
directional ablation via affine subspace projection alone produces incoherent
outputs. Code for reproducing our results is available at
https://github.com/EleutherAI/steering-llama3 .
| true | true |
Thomas Marshall and Adam Scherlis and Nora Belrose
| 2,024 | null |
https://arxiv.org/abs/2411.09003
| null |
ArXiv preprint
|
Refusal in LLMs is an Affine Function
|
Refusal in LLMs is an Affine Function
|
http://arxiv.org/pdf/2411.09003v3
|
We propose affine concept editing (ACE) as an approach for steering language
models' behavior by intervening directly in activations. We begin with an
affine decomposition of model activation vectors and show that prior methods
for steering model behavior correspond to subsets of terms of this
decomposition. We then provide a derivation of ACE and use it to control
refusal behavior on ten different models, including Llama 3 70B. ACE combines
affine subspace projection and activation addition to reliably control the
model's refusal responses across prompt types. We evaluate the results using
LLM-based scoring on a collection of harmful and harmless prompts. Our
experiments demonstrate that ACE consistently achieves more precise control
over model behavior than existing methods and generalizes to models where
directional ablation via affine subspace projection alone produces incoherent
outputs. Code for reproducing our results is available at
https://github.com/EleutherAI/steering-llama3 .
|
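The ACE abstract describes combining an affine decomposition (projection measured relative to a reference point rather than the origin) with activation addition. The sketch below illustrates that decomposition on synthetic vectors; the reference point and target coefficient used here are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64
harmful = rng.normal(size=(100, d)) + 1.5
harmless = rng.normal(size=(100, d))

r = harmful.mean(0) - harmless.mean(0)
r_hat = r / np.linalg.norm(r)
ref = harmless.mean(0)   # illustrative reference point for the affine decomposition

def affine_concept_edit(x, direction, reference, target_coeff=0.0):
    """Project out the concept component measured relative to `reference`,
    then add the direction back with a chosen coefficient (affine edit)."""
    centered = x - reference
    projected = centered - np.outer(centered @ direction, direction)
    return reference + projected + target_coeff * direction

x = rng.normal(size=(3, d)) + 1.0
edited = affine_concept_edit(x, r_hat, ref, target_coeff=0.0)
print(((edited - ref) @ r_hat).round(6))  # ~0: concept coordinate set to the target value
```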
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
zou2023representationengineeringtopdownapproach
|
\cite{zou2023representationengineeringtopdownapproach}
|
Representation Engineering: A Top-Down Approach to AI Transparency
|
http://arxiv.org/abs/2310.01405v4
|
In this paper, we identify and characterize the emerging area of
representation engineering (RepE), an approach to enhancing the transparency of
AI systems that draws on insights from cognitive neuroscience. RepE places
population-level representations, rather than neurons or circuits, at the
center of analysis, equipping us with novel methods for monitoring and
manipulating high-level cognitive phenomena in deep neural networks (DNNs). We
provide baselines and an initial analysis of RepE techniques, showing that they
offer simple yet effective solutions for improving our understanding and
control of large language models. We showcase how these methods can provide
traction on a wide range of safety-relevant problems, including honesty,
harmlessness, power-seeking, and more, demonstrating the promise of top-down
transparency research. We hope that this work catalyzes further exploration of
RepE and fosters advancements in the transparency and safety of AI systems.
| true | true |
Andy Zou and Long Phan and Sarah Chen and James Campbell and Phillip Guo and Richard Ren and Alexander Pan and Xuwang Yin and Mantas Mazeika and Ann-Kathrin Dombrowski and Shashwat Goel and Nathaniel Li and Michael J. Byun and Zifan Wang and Alex Mallen and Steven Basart and Sanmi Koyejo and Dawn Song and Matt Fredrikson and J. Zico Kolter and Dan Hendrycks
| 2,023 | null |
https://arxiv.org/abs/2310.01405
| null |
ArXiv preprint
|
Representation Engineering: A Top-Down Approach to AI Transparency
|
Representation Engineering: A Top-Down Approach to AI ...
|
https://montrealethics.ai/representation-engineering-a-top-down-approach-to-ai-transparency/
|
RepE is a top-down approach to transparency research that treats representations as the fundamental unit of analysis, aiming to understand and control
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
Spectralediting
|
\cite{Spectralediting}
|
Spectral Editing of Activations for Large Language Model Alignment
|
http://arxiv.org/abs/2405.09719v3
|
Large language models (LLMs) often exhibit undesirable behaviours, such as
generating untruthful or biased content. Editing their internal representations
has been shown to be effective in mitigating such behaviours on top of the
existing alignment methods. We propose a novel inference-time editing method,
namely spectral editing of activations (SEA), to project the input
representations into directions with maximal covariance with the positive
demonstrations (e.g., truthful) while minimising covariance with the negative
demonstrations (e.g., hallucinated). We also extend our method to non-linear
editing using feature functions. We run extensive experiments on benchmarks
concerning truthfulness and bias with six open-source LLMs of different sizes
and model families. The results demonstrate the superiority of SEA in
effectiveness, generalisation to similar tasks, as well as computation and data
efficiency. We also show that SEA editing only has a limited negative impact on
other model capabilities.
| true | true |
Yifu Qiu and
Zheng Zhao and
Yftah Ziser and
Anna Korhonen and
Edoardo Maria Ponti and
Shay B. Cohen
| 2,024 | null |
http://papers.nips.cc/paper_files/paper/2024/hash/684c59d614fe6ae74a3be8c3ef07e061-Abstract-Conference.html
| null | null |
Spectral Editing of Activations for Large Language Model Alignment
|
Spectral Editing of Activations for Large Language Model Alignment
|
http://arxiv.org/pdf/2405.09719v3
|
Large language models (LLMs) often exhibit undesirable behaviours, such as
generating untruthful or biased content. Editing their internal representations
has been shown to be effective in mitigating such behaviours on top of the
existing alignment methods. We propose a novel inference-time editing method,
namely spectral editing of activations (SEA), to project the input
representations into directions with maximal covariance with the positive
demonstrations (e.g., truthful) while minimising covariance with the negative
demonstrations (e.g., hallucinated). We also extend our method to non-linear
editing using feature functions. We run extensive experiments on benchmarks
concerning truthfulness and bias with six open-source LLMs of different sizes
and model families. The results demonstrate the superiority of SEA in
effectiveness, generalisation to similar tasks, as well as computation and data
efficiency. We also show that SEA editing only has a limited negative impact on
other model capabilities.
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
bhattacharjee2024inferencetimecategorywisesafetysteering
|
\cite{bhattacharjee2024inferencetimecategorywisesafetysteering}
|
Towards Inference-time Category-wise Safety Steering for Large Language
Models
|
http://arxiv.org/abs/2410.01174v1
|
While large language models (LLMs) have seen unprecedented advancements in
capabilities and applications across a variety of use-cases, safety alignment
of these models is still an area of active research. The fragile nature of
LLMs, even models that have undergone extensive alignment and safety training
regimes, warrants additional safety steering steps via training-free,
inference-time methods. While recent work in the area of mechanistic
interpretability has investigated how activations in latent representation
spaces may encode concepts, and thereafter performed representation engineering
to induce such concepts in LLM outputs, the applicability of such for safety is
relatively under-explored. Unlike recent inference-time safety steering works,
in this paper we explore safety steering of LLM outputs using: (i)
category-specific steering vectors, thereby enabling fine-grained control over
the steering, and (ii) sophisticated methods for extracting informative
steering vectors for more effective safety steering while retaining quality of
the generated text. We demonstrate our exploration on multiple LLMs and
datasets, and showcase the effectiveness of the proposed steering method, along
with a discussion on the implications and best practices.
| true | true |
Amrita Bhattacharjee and Shaona Ghosh and Traian Rebedea and Christopher Parisien
| 2,024 | null |
https://arxiv.org/abs/2410.01174
| null |
ArXiv preprint
|
Towards Inference-time Category-wise Safety Steering for Large Language
Models
|
Towards Inference-time Category-wise Safety Steering for Large...
|
https://openreview.net/forum?id=EkQRNLPFcn
|
We propose and explore an inference-time safety steering method for LLMs by intervening using category-specific steering vectors computed using model
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
uppaal2025profs
|
\cite{uppaal2025profs}
|
Model Editing as a Robust and Denoised variant of DPO: A Case Study on
Toxicity
|
http://arxiv.org/abs/2405.13967v5
|
Recent alignment algorithms such as direct preference optimization (DPO) have
been developed to improve the safety of large language models (LLMs) by
training these models to match human behaviors exemplified by preference data.
However, these methods are both computationally intensive and lacking in
controllability and transparency, inhibiting their widespread use. Furthermore,
these tuning-based methods require large-scale preference data for training and
are susceptible to noisy preference data. In this paper, we introduce a
tuning-free alignment alternative, ProFS (Projection Filter for Subspaces), and
demonstrate its effectiveness under the use case of toxicity reduction.
Grounded on theory from factor analysis, ProFS is a sample-efficient model
editing approach that identifies a toxic subspace in the model parameter space
and reduces model toxicity by projecting away the detected subspace. The toxic
subspace is identified by extracting preference data embeddings from the
language model, and removing non-toxic information from these embeddings. We
show that ProFS is more sample-efficient than DPO, further showcasing greater
robustness to noisy data. Finally, we attempt to connect tuning based alignment
with editing, by establishing both theoretical and empirical connections
between ProFS and DPO, showing that ProFS can be interpreted as a denoised
version of a single DPO step.
| true | true |
Uppaal, Rheeya and Dey, Apratim and He, Yiting and Zhong, Yiqiao and Hu, Junjie
| 2,025 | null | null | null | null |
Model Editing as a Robust and Denoised variant of DPO: A Case Study on
Toxicity
|
Rheeya Uppaal - Google Scholar
|
https://scholar.google.com/citations?user=nx3vmEkAAAAJ&hl=en
|
DeTox: Toxic Subspace Projection for Model Editing. R Uppaal, A De ... 2019. Model editing as a robust and denoised variant of dpo: A case study on toxicity.
|
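As a rough numpy sketch of the subspace-projection idea in the ProFS abstract: estimate a "toxic" subspace from preference-pair embedding differences via SVD (after removing mean information) and project it out of a weight block. The planted toxic directions, the choice of weight block, and the projection side are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 256, 64
toxic_dirs = np.eye(d)[:2]                            # planted 2-D "toxic" subspace
toxic_emb = rng.normal(size=(n, d)) + 4.0 * rng.normal(size=(n, 2)) @ toxic_dirs
nontoxic_emb = rng.normal(size=(n, d))

# Differences of paired embeddings, with the mean (shared) information removed,
# then top singular directions as the estimated toxic subspace.
diff = toxic_emb - nontoxic_emb
diff = diff - diff.mean(axis=0)
_, _, Vt = np.linalg.svd(diff, full_matrices=False)
toxic_basis = Vt[:2].T                                # (d, 2)

def project_away(W, basis):
    """Edit a weight/embedding block by removing its toxic-subspace component."""
    return W - W @ basis @ basis.T

W = rng.normal(size=(128, d))                         # stand-in for an MLP weight block
W_edited = project_away(W, toxic_basis)
print("residual toxic component:", np.abs(W_edited @ toxic_basis).max())  # ~0
```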
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
burns2024discoveringlatentknowledgelanguage
|
\cite{burns2024discoveringlatentknowledgelanguage}
|
Discovering Latent Knowledge in Language Models Without Supervision
|
http://arxiv.org/abs/2212.03827v2
|
Existing techniques for training language models can be misaligned with the
truth: if we train models with imitation learning, they may reproduce errors
that humans make; if we train them to generate text that humans rate highly,
they may output errors that human evaluators can't detect. We propose
circumventing this issue by directly finding latent knowledge inside the
internal activations of a language model in a purely unsupervised way.
Specifically, we introduce a method for accurately answering yes-no questions
given only unlabeled model activations. It works by finding a direction in
activation space that satisfies logical consistency properties, such as that a
statement and its negation have opposite truth values. We show that despite
using no supervision and no model outputs, our method can recover diverse
knowledge represented in large language models: across 6 models and 10
question-answering datasets, it outperforms zero-shot accuracy by 4\% on
average. We also find that it cuts prompt sensitivity in half and continues to
maintain high accuracy even when models are prompted to generate incorrect
answers. Our results provide an initial step toward discovering what language
models know, distinct from what they say, even when we don't have access to
explicit ground truth labels.
| true | true |
Collin Burns and
Haotian Ye and
Dan Klein and
Jacob Steinhardt
| 2,023 | null |
https://openreview.net/pdf?id=ETKGuby0hcs
| null | null |
Discovering Latent Knowledge in Language Models Without Supervision
|
Discovering Latent Knowledge in Language Models Without Supervision
|
http://arxiv.org/pdf/2212.03827v2
|
Existing techniques for training language models can be misaligned with the
truth: if we train models with imitation learning, they may reproduce errors
that humans make; if we train them to generate text that humans rate highly,
they may output errors that human evaluators can't detect. We propose
circumventing this issue by directly finding latent knowledge inside the
internal activations of a language model in a purely unsupervised way.
Specifically, we introduce a method for accurately answering yes-no questions
given only unlabeled model activations. It works by finding a direction in
activation space that satisfies logical consistency properties, such as that a
statement and its negation have opposite truth values. We show that despite
using no supervision and no model outputs, our method can recover diverse
knowledge represented in large language models: across 6 models and 10
question-answering datasets, it outperforms zero-shot accuracy by 4\% on
average. We also find that it cuts prompt sensitivity in half and continues to
maintain high accuracy even when models are prompted to generate incorrect
answers. Our results provide an initial step toward discovering what language
models know, distinct from what they say, even when we don't have access to
explicit ground truth labels.
|
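The abstract above describes finding a direction that satisfies logical-consistency properties (a statement and its negation should get complementary probabilities) without any labels. A compact PyTorch sketch of that contrast-consistent objective on synthetic contrast pairs follows; the paper's per-set normalization of activations and other details are omitted, and the data here is synthetic.

```python
import torch

torch.manual_seed(0)
n, d = 512, 32
# Synthetic contrast-pair activations: phi(statement) and phi(negated statement).
truth = torch.randint(0, 2, (n, 1)).float()           # latent truth value (never used in training)
direction = torch.randn(d)
x_pos = torch.randn(n, d) + truth * direction          # activation when the statement is asserted
x_neg = torch.randn(n, d) + (1 - truth) * direction    # activation when its negation is asserted

w = (0.05 * torch.randn(d)).requires_grad_()
b = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([w, b], lr=1e-2)

for _ in range(500):
    p_pos = torch.sigmoid(x_pos @ w + b)
    p_neg = torch.sigmoid(x_neg @ w + b)
    consistency = (p_pos - (1 - p_neg)) ** 2            # p(x) + p(not x) should equal 1
    confidence = torch.minimum(p_pos, p_neg) ** 2       # discourage the degenerate 0.5 answer
    loss = (consistency + confidence).mean()
    opt.zero_grad(); loss.backward(); opt.step()

pred = ((torch.sigmoid(x_pos @ w + b) + 1 - torch.sigmoid(x_neg @ w + b)) / 2 > 0.5).float()
acc = (pred == truth.squeeze()).float().mean()
print("accuracy (up to sign):", torch.maximum(acc, 1 - acc).item())
```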
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
panickssery2024steeringllama2contrastive
|
\cite{panickssery2024steeringllama2contrastive}
|
Steering Llama 2 via Contrastive Activation Addition
|
http://arxiv.org/abs/2312.06681v4
|
We introduce Contrastive Activation Addition (CAA), an innovative method for
steering language models by modifying their activations during forward passes.
CAA computes "steering vectors" by averaging the difference in residual stream
activations between pairs of positive and negative examples of a particular
behavior, such as factual versus hallucinatory responses. During inference,
these steering vectors are added at all token positions after the user's prompt
with either a positive or negative coefficient, allowing precise control over
the degree of the targeted behavior. We evaluate CAA's effectiveness on Llama 2
Chat using multiple-choice behavioral question datasets and open-ended
generation tasks. We demonstrate that CAA significantly alters model behavior,
is effective over and on top of traditional methods like finetuning and system
prompt design, and minimally reduces capabilities. Moreover, we gain deeper
insights into CAA's mechanisms by employing various activation space
interpretation methods. CAA accurately steers model outputs and sheds light on
how high-level concepts are represented in Large Language Models (LLMs).
| true | true |
Nina Panickssery and Nick Gabrieli and Julian Schulz and Meg Tong and Evan Hubinger and Alexander Matt Turner
| 2,023 | null |
https://arxiv.org/abs/2312.06681
| null |
ArXiv preprint
|
Steering Llama 2 via Contrastive Activation Addition
|
Steering Llama 2 via Contrastive Activation Addition
|
http://arxiv.org/pdf/2312.06681v4
|
We introduce Contrastive Activation Addition (CAA), an innovative method for
steering language models by modifying their activations during forward passes.
CAA computes "steering vectors" by averaging the difference in residual stream
activations between pairs of positive and negative examples of a particular
behavior, such as factual versus hallucinatory responses. During inference,
these steering vectors are added at all token positions after the user's prompt
with either a positive or negative coefficient, allowing precise control over
the degree of the targeted behavior. We evaluate CAA's effectiveness on Llama 2
Chat using multiple-choice behavioral question datasets and open-ended
generation tasks. We demonstrate that CAA significantly alters model behavior,
is effective over and on top of traditional methods like finetuning and system
prompt design, and minimally reduces capabilities. Moreover, we gain deeper
insights into CAA's mechanisms by employing various activation space
interpretation methods. CAA accurately steers model outputs and sheds light on
how high-level concepts are represented in Large Language Models (LLMs).
|
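A minimal numpy sketch of the steering-vector computation and injection described in the CAA abstract, using synthetic residual-stream activations; the layer choice, token positions, and coefficient values in real use follow the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 64
# Residual-stream activations for contrastive A/B pairs exhibiting vs. not
# exhibiting the target behaviour (synthetic stand-ins).
acts_with_behaviour = rng.normal(size=(150, d)) + 1.0
acts_without_behaviour = rng.normal(size=(150, d))

# CAA steering vector: mean difference over the contrast pairs.
steering_vector = (acts_with_behaviour - acts_without_behaviour).mean(axis=0)

def steer(residual_stream, vector, coeff):
    """Add the steering vector at every token position after the prompt."""
    return residual_stream + coeff * vector

seq = rng.normal(size=(10, d))                 # (tokens, d_model) at one layer
more_behaviour = steer(seq, steering_vector, coeff=+1.0)
less_behaviour = steer(seq, steering_vector, coeff=-1.0)
print(more_behaviour.shape, less_behaviour.shape)
```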
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
turner2024steeringlanguagemodelsactivation
|
\cite{turner2024steeringlanguagemodelsactivation}
|
Steering Language Models With Activation Engineering
|
http://arxiv.org/abs/2308.10248v5
|
Prompt engineering and finetuning aim to maximize language model performance
on a given metric (like toxicity reduction). However, these methods do not
fully elicit a model's capabilities. To reduce this gap, we introduce
activation engineering: the inference-time modification of activations in order
to control (or steer) model outputs. Specifically, we introduce the Activation
Addition (ActAdd) technique, which contrasts the intermediate activations on
prompt pairs (such as "Love" versus "Hate") to compute a steering vector
(Subramani et al. 2022). By tactically adding in e.g. the "Love" - "Hate"
steering vector during the forward pass, we achieve SOTA on
negative-to-positive sentiment shift and detoxification using models including
LLaMA-3 and OPT. ActAdd yields inference-time control over high-level output
properties (like topic and sentiment) while preserving performance on
off-target tasks. ActAdd is lightweight: it does not require any machine
optimization and works with a single pair of data points, which enables rapid
iteration over steering. ActAdd demonstrates the power of activation
engineering.
| true | true |
Alexander Matt Turner and Lisa Thiergart and Gavin Leech and David Udell and Juan J. Vazquez and Ulisse Mini and Monte MacDiarmid
| 2,023 | null |
https://arxiv.org/abs/2308.10248
| null |
ArXiv preprint
|
Steering Language Models With Activation Engineering
|
Steering Language Models With Activation Engineering
|
http://arxiv.org/pdf/2308.10248v5
|
Prompt engineering and finetuning aim to maximize language model performance
on a given metric (like toxicity reduction). However, these methods do not
fully elicit a model's capabilities. To reduce this gap, we introduce
activation engineering: the inference-time modification of activations in order
to control (or steer) model outputs. Specifically, we introduce the Activation
Addition (ActAdd) technique, which contrasts the intermediate activations on
prompt pairs (such as "Love" versus "Hate") to compute a steering vector
(Subramani et al. 2022). By tactically adding in e.g. the "Love" - "Hate"
steering vector during the forward pass, we achieve SOTA on
negative-to-positive sentiment shift and detoxification using models including
LLaMA-3 and OPT. ActAdd yields inference-time control over high-level output
properties (like topic and sentiment) while preserving performance on
off-target tasks. ActAdd is lightweight: it does not require any machine
optimization and works with a single pair of data points, which enables rapid
iteration over steering. ActAdd demonstrates the power of activation
engineering.
|
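ActAdd, as described above, differs from dataset-averaged steering mainly in that the vector comes from a single prompt pair and is injected with a tuned coefficient during the forward pass. A tiny sketch with synthetic activations (coefficient and positions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
d = 64
# Intermediate activations for one contrast pair of prompts, e.g. "Love" vs. "Hate"
# (synthetic stand-ins for a single layer and position).
act_love = rng.normal(size=d) + 2.0
act_hate = rng.normal(size=d) - 2.0

act_add_vector = act_love - act_hate            # single-pair steering vector

def act_add(hidden, vector, coeff=4.0, positions=slice(0, None)):
    """Inject the vector into the chosen positions during the forward pass."""
    hidden = hidden.copy()
    hidden[positions] += coeff * vector
    return hidden

hidden_states = rng.normal(size=(12, d))
print(act_add(hidden_states, act_add_vector, coeff=4.0).shape)
```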
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
lee2025programmingrefusalconditionalactivation
|
\cite{lee2025programmingrefusalconditionalactivation}
|
Programming Refusal with Conditional Activation Steering
|
http://arxiv.org/abs/2409.05907v3
|
LLMs have shown remarkable capabilities, but precisely controlling their
response behavior remains challenging. Existing activation steering methods
alter LLM behavior indiscriminately, limiting their practical applicability in
settings where selective responses are essential, such as content moderation or
domain-specific assistants. In this paper, we propose Conditional Activation
Steering (CAST), which analyzes LLM activation patterns during inference to
selectively apply or withhold activation steering based on the input context.
Our method is based on the observation that different categories of prompts
activate distinct patterns in the model's hidden states. Using CAST, one can
systematically control LLM behavior with rules like "if input is about hate
speech or adult content, then refuse" or "if input is not about legal advice,
then refuse." This allows for selective modification of responses to specific
content while maintaining normal responses to other content, all without
requiring weight optimization. We release an open-source implementation of our
framework at github.com/IBM/activation-steering .
| true | true |
Bruce W. Lee and Inkit Padhi and Karthikeyan Natesan Ramamurthy and Erik Miehling and Pierre Dognin and Manish Nagireddy and Amit Dhurandhar
| 2,024 | null |
https://arxiv.org/abs/2409.05907
| null |
ArXiv preprint
|
Programming Refusal with Conditional Activation Steering
|
Programming Refusal with Conditional Activation Steering
|
http://arxiv.org/pdf/2409.05907v3
|
LLMs have shown remarkable capabilities, but precisely controlling their
response behavior remains challenging. Existing activation steering methods
alter LLM behavior indiscriminately, limiting their practical applicability in
settings where selective responses are essential, such as content moderation or
domain-specific assistants. In this paper, we propose Conditional Activation
Steering (CAST), which analyzes LLM activation patterns during inference to
selectively apply or withhold activation steering based on the input context.
Our method is based on the observation that different categories of prompts
activate distinct patterns in the model's hidden states. Using CAST, one can
systematically control LLM behavior with rules like "if input is about hate
speech or adult content, then refuse" or "if input is not about legal advice,
then refuse." This allows for selective modification of responses to specific
content while maintaining normal responses to other content, all without
requiring weight optimization. We release an open-source implementation of our
framework at github.com/IBM/activation-steering .
|
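The CAST abstract describes gating a steering intervention on whether the prompt's hidden state matches a condition. A simple sketch of that conditional check with synthetic vectors follows; the condition direction, threshold rule, and refusal vector here are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(6)
d = 64
# Condition direction: separates prompts of the target category (e.g. hate speech)
# from other prompts, via a difference of mean hidden states (synthetic).
cat_prompts = rng.normal(size=(100, d)) + 1.5
other_prompts = rng.normal(size=(100, d))
condition_vec = cat_prompts.mean(0) - other_prompts.mean(0)
condition_vec /= np.linalg.norm(condition_vec)
threshold = 0.5 * (cat_prompts @ condition_vec).mean() \
          + 0.5 * (other_prompts @ condition_vec).mean()

refusal_vec = rng.normal(size=d)                # stand-in for a refusal steering vector

def conditionally_steer(prompt_hidden, gen_hidden, coeff=3.0):
    """Apply refusal steering only when the prompt matches the condition."""
    if prompt_hidden @ condition_vec > threshold:
        return gen_hidden + coeff * refusal_vec
    return gen_hidden

h_prompt = cat_prompts[0]
h_gen = rng.normal(size=d)
print(np.allclose(conditionally_steer(h_prompt, h_gen), h_gen))  # False: steering applied
```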
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
guerner2024geometricnotioncausalprobing
|
\cite{guerner2024geometricnotioncausalprobing}
|
A Geometric Notion of Causal Probing
|
http://arxiv.org/abs/2307.15054v4
|
The linear subspace hypothesis (Bolukbasi et al., 2016) states that, in a
language model's representation space, all information about a concept such as
verbal number is encoded in a linear subspace. Prior work has relied on
auxiliary classification tasks to identify and evaluate candidate subspaces
that might give support for this hypothesis. We instead give a set of intrinsic
criteria which characterize an ideal linear concept subspace and enable us to
identify the subspace using only the language model distribution. Our
information-theoretic framework accounts for spuriously correlated features in
the representation space (Kumar et al., 2022) by reconciling the statistical
notion of concept information and the geometric notion of how concepts are
encoded in the representation space. As a byproduct of this analysis, we
hypothesize a causal process for how a language model might leverage concepts
during generation. Empirically, we find that linear concept erasure is
successful in erasing most concept information under our framework for verbal
number as well as some complex aspect-level sentiment concepts from a
restaurant review dataset. Our causal intervention for controlled generation
shows that, for at least one concept across two languages models, the concept
subspace can be used to manipulate the concept value of the generated word with
precision.
| true | true |
Clément Guerner and Anej Svete and Tianyu Liu and Alexander Warstadt and Ryan Cotterell
| 2,023 | null |
https://arxiv.org/abs/2307.15054
| null |
ArXiv preprint
|
A Geometric Notion of Causal Probing
|
A Geometric Notion of Causal Probing
|
http://arxiv.org/pdf/2307.15054v4
|
The linear subspace hypothesis (Bolukbasi et al., 2016) states that, in a
language model's representation space, all information about a concept such as
verbal number is encoded in a linear subspace. Prior work has relied on
auxiliary classification tasks to identify and evaluate candidate subspaces
that might give support for this hypothesis. We instead give a set of intrinsic
criteria which characterize an ideal linear concept subspace and enable us to
identify the subspace using only the language model distribution. Our
information-theoretic framework accounts for spuriously correlated features in
the representation space (Kumar et al., 2022) by reconciling the statistical
notion of concept information and the geometric notion of how concepts are
encoded in the representation space. As a byproduct of this analysis, we
hypothesize a causal process for how a language model might leverage concepts
during generation. Empirically, we find that linear concept erasure is
successful in erasing most concept information under our framework for verbal
number as well as some complex aspect-level sentiment concepts from a
restaurant review dataset. Our causal intervention for controlled generation
shows that, for at least one concept across two languages models, the concept
subspace can be used to manipulate the concept value of the generated word with
precision.
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
haghighatkhah2022betterhitnailhead
|
\cite{haghighatkhah2022betterhitnailhead}
|
Better Hit the Nail on the Head than Beat around the Bush: Removing
Protected Attributes with a Single Projection
|
http://arxiv.org/abs/2212.04273v1
|
Bias elimination and recent probing studies attempt to remove specific
information from embedding spaces. Here it is important to remove as much of
the target information as possible, while preserving any other information
present. INLP is a popular recent method which removes specific information
through iterative nullspace projections. Multiple iterations, however, increase
the risk that information other than the target is negatively affected. We
introduce two methods that find a single targeted projection: Mean Projection
(MP, more efficient) and Tukey Median Projection (TMP, with theoretical
guarantees). Our comparison between MP and INLP shows that (1) one MP
projection removes linear separability based on the target and (2) MP has less
impact on the overall space. Further analysis shows that applying random
projections after MP leads to the same overall effects on the embedding space
as the multiple projections of INLP. Applying one targeted (MP) projection
hence is methodologically cleaner than applying multiple (INLP) projections
that introduce random effects.
| true | true |
Haghighatkhah, Pantea and
Fokkens, Antske and
Sommerauer, Pia and
Speckmann, Bettina and
Verbeek, Kevin
| 2,022 | null |
https://aclanthology.org/2022.emnlp-main.575
|
10.18653/v1/2022.emnlp-main.575
| null |
Better Hit the Nail on the Head than Beat around the Bush: Removing
Protected Attributes with a Single Projection
|
Better Hit the Nail on the Head than Beat around the Bush
|
https://www.researchgate.net/publication/366135893_Better_Hit_the_Nail_on_the_Head_than_Beat_around_the_Bush_Removing_Protected_Attributes_with_a_Single_Projection
|
Our comparison between MP and INLP shows that (1) one MP projection removes linear separability based on the target and (2) MP has less impact
|
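Mean Projection, as described above, is a single projection along the normalized difference of class means. A short numpy sketch on synthetic embeddings:

```python
import numpy as np

rng = np.random.default_rng(7)
d = 50
emb_group_a = rng.normal(size=(300, d)) + np.eye(d)[0]   # embeddings with attribute A
emb_group_b = rng.normal(size=(300, d)) - np.eye(d)[0]   # embeddings with attribute B

# Mean Projection: one projection along the normalised difference of class means.
v = emb_group_a.mean(0) - emb_group_b.mean(0)
v /= np.linalg.norm(v)

def mean_projection(X, direction):
    return X - np.outer(X @ direction, direction)

X = np.vstack([emb_group_a, emb_group_b])
X_mp = mean_projection(X, v)
# After the single MP step the two groups are no longer separable along v.
print(np.abs(X_mp @ v).max())
```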
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
ravfogel2020nulloutguardingprotected
|
\cite{ravfogel2020nulloutguardingprotected}
|
Null It Out: Guarding Protected Attributes by Iterative Nullspace
Projection
|
http://arxiv.org/abs/2004.07667v2
|
The ability to control for the kinds of information encoded in neural
representation has a variety of use cases, especially in light of the challenge
of interpreting these models. We present Iterative Null-space Projection
(INLP), a novel method for removing information from neural representations.
Our method is based on repeated training of linear classifiers that predict a
certain property we aim to remove, followed by projection of the
representations on their null-space. By doing so, the classifiers become
oblivious to that target property, making it hard to linearly separate the data
according to it. While applicable for multiple uses, we evaluate our method on
bias and fairness use-cases, and show that our method is able to mitigate bias
in word embeddings, as well as to increase fairness in a setting of multi-class
classification.
| true | true |
Ravfogel, Shauli and
Elazar, Yanai and
Gonen, Hila and
Twiton, Michael and
Goldberg, Yoav
| 2,020 | null |
https://aclanthology.org/2020.acl-main.647
|
10.18653/v1/2020.acl-main.647
| null |
Null It Out: Guarding Protected Attributes by Iterative Nullspace
Projection
|
Shauli Ravfogel - Google Scholar
|
https://scholar.google.co.il/citations?user=x09r-T8AAAAJ&hl=en
|
Null it out: Guarding protected attributes by iterative nullspace projection. S Ravfogel, Y Elazar, H Gonen, M Twiton, Y Goldberg. Proceedings of the 58th
|
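A compact sketch of the INLP loop described above, using scikit-learn logistic-regression probes on synthetic data; the number of iterations and the stopping criterion are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
n, d = 600, 40
y = rng.integers(0, 2, size=n)                           # protected attribute
X = rng.normal(size=(n, d)) + np.outer(y, np.eye(d)[0] + 0.5 * np.eye(d)[1])

def inlp(X, y, n_iters=10):
    """Repeatedly train a linear probe for the attribute and project its
    weight direction out of the representations (nullspace projection)."""
    X_proj = X.copy()
    for _ in range(n_iters):
        clf = LogisticRegression(max_iter=1000).fit(X_proj, y)
        acc = clf.score(X_proj, y)
        w = clf.coef_[0]
        w = w / np.linalg.norm(w)
        X_proj = X_proj - np.outer(X_proj @ w, w)          # project onto the probe's nullspace
        if acc < 0.55:                                      # attribute no longer recoverable
            break
    return X_proj

X_clean = inlp(X, y)
print("probe accuracy after INLP:",
      LogisticRegression(max_iter=1000).fit(X_clean, y).score(X_clean, y))
```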
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
belrose2023leaceperfectlinearconcept
|
\cite{belrose2023leaceperfectlinearconcept}
|
LEACE: Perfect linear concept erasure in closed form
|
http://arxiv.org/abs/2306.03819v4
|
Concept erasure aims to remove specified features from an embedding. It can
improve fairness (e.g. preventing a classifier from using gender or race) and
interpretability (e.g. removing a concept to observe changes in model
behavior). We introduce LEAst-squares Concept Erasure (LEACE), a closed-form
method which provably prevents all linear classifiers from detecting a concept
while changing the embedding as little as possible, as measured by a broad
class of norms. We apply LEACE to large language models with a novel procedure
called "concept scrubbing," which erases target concept information from every
layer in the network. We demonstrate our method on two tasks: measuring the
reliance of language models on part-of-speech information, and reducing gender
bias in BERT embeddings. Code is available at
https://github.com/EleutherAI/concept-erasure.
| true | true |
Nora Belrose and
David Schneider-Joseph and
Shauli Ravfogel and
Ryan Cotterell and
Edward Raff and
Stella Biderman
| 2,023 | null |
http://papers.nips.cc/paper_files/paper/2023/hash/d066d21c619d0a78c5b557fa3291a8f4-Abstract-Conference.html
| null | null |
LEACE: Perfect linear concept erasure in closed form
|
LEACE: Perfect linear concept erasure in closed form
|
http://arxiv.org/pdf/2306.03819v4
|
Concept erasure aims to remove specified features from an embedding. It can
improve fairness (e.g. preventing a classifier from using gender or race) and
interpretability (e.g. removing a concept to observe changes in model
behavior). We introduce LEAst-squares Concept Erasure (LEACE), a closed-form
method which provably prevents all linear classifiers from detecting a concept
while changing the embedding as little as possible, as measured by a broad
class of norms. We apply LEACE to large language models with a novel procedure
called "concept scrubbing," which erases target concept information from every
layer in the network. We demonstrate our method on two tasks: measuring the
reliance of language models on part-of-speech information, and reducing gender
bias in BERT embeddings. Code is available at
https://github.com/EleutherAI/concept-erasure.
|
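As a simplified illustration of the closed-form erasure idea in the LEACE abstract, the sketch below whitens the features, projects out the column space of the whitened feature-concept cross-covariance, and unwhitens; on synthetic data this drives the linear cross-covariance with the concept to zero. It is a sketch under those assumptions, not the authors' exact objective or norm handling, and their released concept-erasure code should be treated as authoritative.

```python
import numpy as np

rng = np.random.default_rng(9)
n, d = 1000, 30
z = rng.integers(0, 2, size=(n, 1)).astype(float)        # concept labels
X = rng.normal(size=(n, d)) + 2.0 * z * np.eye(d)[0]

Xc = X - X.mean(0)
Zc = z - z.mean(0)

# Whitening transform of the feature covariance.
cov = Xc.T @ Xc / n
evals, evecs = np.linalg.eigh(cov)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T              # cov^{-1/2}
W_inv = evecs @ np.diag(evals ** 0.5) @ evecs.T

# Orthogonal projection onto the column space of the whitened cross-covariance.
sigma_xz = Xc.T @ Zc / n
Q, _ = np.linalg.qr(W @ sigma_xz)
P = Q @ Q.T

def erase(x):
    """Remove the linearly-decodable concept component (simplified LEACE-style eraser)."""
    centered = x - X.mean(0)
    return X.mean(0) + centered - (W_inv @ P @ W @ centered.T).T

X_erased = erase(X)
# Cross-covariance with the concept is (numerically) zero after erasure.
print(np.abs((X_erased - X_erased.mean(0)).T @ Zc / n).max())
```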
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
wang2024trojanactivationattackredteaming
|
\cite{wang2024trojanactivationattackredteaming}
|
Trojan Activation Attack: Red-Teaming Large Language Models using
Activation Steering for Safety-Alignment
|
http://arxiv.org/abs/2311.09433v3
|
To ensure AI safety, instruction-tuned Large Language Models (LLMs) are
specifically trained to ensure alignment, which refers to making models behave
in accordance with human intentions. While these models have demonstrated
commendable results on various safety benchmarks, the vulnerability of their
safety alignment has not been extensively studied. This is particularly
troubling given the potential harm that LLMs can inflict. Existing attack
methods on LLMs often rely on poisoned training data or the injection of
malicious prompts. These approaches compromise the stealthiness and
generalizability of the attacks, making them susceptible to detection.
Additionally, these models often demand substantial computational resources for
implementation, making them less practical for real-world applications. In this
work, we study a different attack scenario, called Trojan Activation Attack
(TA^2), which injects trojan steering vectors into the activation layers of
LLMs. These malicious steering vectors can be triggered at inference time to
steer the models toward attacker-desired behaviors by manipulating their
activations. Our experiment results on four primary alignment tasks show that
TA^2 is highly effective and adds little or no overhead to attack efficiency.
Additionally, we discuss potential countermeasures against such activation
attacks.
| true | true |
Haoran Wang and Kai Shu
| 2,023 | null |
https://arxiv.org/abs/2311.09433
| null |
ArXiv preprint
|
Trojan Activation Attack: Red-Teaming Large Language Models using
Activation Steering for Safety-Alignment
|
Trojan Activation Attack: Red-Teaming Large Language Models ...
|
https://arxiv.org/html/2311.09433v3
|
Trojan Activation Attack: Red-Teaming Large Language Models using Activation Steering for Safety-Alignment. Large Language Models (LLMs) are generally trained on massive text corpora scraped from the web (Touvron et al., 2023a; Chowdhery et al., 2022), which are known to contain a substantial amount of objectionable content. Building upon the advancements in activation engineering (Turner et al., 2023) and its application in red-teaming LLMs (Rimsky, 2023a), we perform activation attacks on four primary target alignments under a diverse range of attack settings. By using activation addition (Turner et al., 2023), activation attacks break the alignments of LLMs by injecting trojan steering vectors that target specific aspects such as truthfulness or toxicity.
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
bolukbasi2016man
|
\cite{bolukbasi2016man}
|
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word
Embeddings
|
http://arxiv.org/abs/1607.06520v1
|
The blind application of machine learning runs the risk of amplifying biases
present in data. Such a danger is facing us with word embedding, a popular
framework to represent text data as vectors which has been used in many machine
learning and natural language processing tasks. We show that even word
embeddings trained on Google News articles exhibit female/male gender
stereotypes to a disturbing extent. This raises concerns because their
widespread use, as we describe, often tends to amplify these biases.
Geometrically, gender bias is first shown to be captured by a direction in the
word embedding. Second, gender neutral words are shown to be linearly separable
from gender definition words in the word embedding. Using these properties, we
provide a methodology for modifying an embedding to remove gender stereotypes,
such as the association between between the words receptionist and female,
while maintaining desired associations such as between the words queen and
female. We define metrics to quantify both direct and indirect gender biases in
embeddings, and develop algorithms to "debias" the embedding. Using
crowd-worker evaluation as well as standard benchmarks, we empirically
demonstrate that our algorithms significantly reduce gender bias in embeddings
while preserving the its useful properties such as the ability to cluster
related concepts and to solve analogy tasks. The resulting embeddings can be
used in applications without amplifying gender bias.
| true | true |
Tolga Bolukbasi and
Kai-Wei Chang and
James Y. Zou and
Venkatesh Saligrama and
Adam Tauman Kalai
| 2,016 | null |
https://proceedings.neurips.cc/paper/2016/hash/a486cd07e4ac3d270571622f4f316ec5-Abstract.html
| null | null |
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word
Embeddings
|
Tolga Bolukbasi - Google Scholar
|
https://scholar.google.com/citations?user=3rF9gtAAAAAJ&hl=en
|
Man is to Computer Programmer as Woman is to Homemaker. T Bolukbasi, KW Chang, J Zou, V Saligrama, A Kalai. Debiasing word embeddings 29, 2016. 240, 2016.
|
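A toy numpy sketch of the two steps named in the abstract above: identify a gender direction from definitional word pairs and neutralize gender-neutral words along it. The vectors here are synthetic (not real word2vec embeddings), and the equalize step from the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(10)
d = 50
words = ["he", "she", "man", "woman", "king", "queen", "receptionist", "programmer"]
emb = {w: 0.5 * rng.normal(size=d) for w in words}        # toy vectors, not real embeddings
gender_axis = rng.normal(size=d)
for w, s in [("he", 1.0), ("man", 1.0), ("king", 1.0),
             ("she", -1.0), ("woman", -1.0), ("queen", -1.0), ("receptionist", -0.6)]:
    emb[w] = emb[w] + s * gender_axis                      # plant a gender component

# 1) Gender direction: top principal component of pair-centred definitional pairs.
pairs = [("he", "she"), ("man", "woman"), ("king", "queen")]
centered = []
for a, b in pairs:
    mu = (emb[a] + emb[b]) / 2
    centered += [emb[a] - mu, emb[b] - mu]
_, _, Vt = np.linalg.svd(np.stack(centered), full_matrices=False)
g = Vt[0]                                                  # unit-norm gender direction

# 2) Neutralise gender-neutral occupation words along g.
def neutralize(v, g):
    return v - (v @ g) * g

for w in ["receptionist", "programmer"]:
    emb[w] = neutralize(emb[w], g)
    print(w, "gender component:", round(float(emb[w] @ g), 6))
```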
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
elhage2022toymodelssuperposition
|
\cite{elhage2022toymodelssuperposition}
|
Toy Models of Superposition
|
http://arxiv.org/abs/2209.10652v1
|
Neural networks often pack many unrelated concepts into a single neuron - a
puzzling phenomenon known as 'polysemanticity' which makes interpretability
much more challenging. This paper provides a toy model where polysemanticity
can be fully understood, arising as a result of models storing additional
sparse features in "superposition." We demonstrate the existence of a phase
change, a surprising connection to the geometry of uniform polytopes, and
evidence of a link to adversarial examples. We also discuss potential
implications for mechanistic interpretability.
| true | true |
Nelson Elhage and Tristan Hume and Catherine Olsson and Nicholas Schiefer and Tom Henighan and Shauna Kravec and Zac Hatfield-Dodds and Robert Lasenby and Dawn Drain and Carol Chen and Roger Grosse and Sam McCandlish and Jared Kaplan and Dario Amodei and Martin Wattenberg and Christopher Olah
| 2,022 | null |
https://arxiv.org/abs/2209.10652
| null |
ArXiv preprint
|
Toy Models of Superposition
|
Toy Models of Superposition
|
http://arxiv.org/pdf/2209.10652v1
|
Neural networks often pack many unrelated concepts into a single neuron - a
puzzling phenomenon known as 'polysemanticity' which makes interpretability
much more challenging. This paper provides a toy model where polysemanticity
can be fully understood, arising as a result of models storing additional
sparse features in "superposition." We demonstrate the existence of a phase
change, a surprising connection to the geometry of uniform polytopes, and
evidence of a link to adversarial examples. We also discuss potential
implications for mechanistic interpretability.
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
park2024linearrepresentationhypothesisgeometry
|
\cite{park2024linearrepresentationhypothesisgeometry}
|
The Linear Representation Hypothesis and the Geometry of Large Language
Models
|
http://arxiv.org/abs/2311.03658v2
|
Informally, the 'linear representation hypothesis' is the idea that
high-level concepts are represented linearly as directions in some
representation space. In this paper, we address two closely related questions:
What does "linear representation" actually mean? And, how do we make sense of
geometric notions (e.g., cosine similarity or projection) in the representation
space? To answer these, we use the language of counterfactuals to give two
formalizations of "linear representation", one in the output (word)
representation space, and one in the input (sentence) space. We then prove
these connect to linear probing and model steering, respectively. To make sense
of geometric notions, we use the formalization to identify a particular
(non-Euclidean) inner product that respects language structure in a sense we
make precise. Using this causal inner product, we show how to unify all notions
of linear representation. In particular, this allows the construction of probes
and steering vectors using counterfactual pairs. Experiments with LLaMA-2
demonstrate the existence of linear representations of concepts, the connection
to interpretation and control, and the fundamental role of the choice of inner
product.
| true | true |
Kiho Park and
Yo Joong Choe and
Victor Veitch
| 2,024 | null |
https://openreview.net/forum?id=UGpGkLzwpP
| null | null |
The Linear Representation Hypothesis and the Geometry of Large Language
Models
|
NeurIPS The Linear Representation Hypothesis in Language Models
|
https://neurips.cc/virtual/2023/77537
|
In the context of large language models, the "linear representation hypothesis" is the idea that high-level concepts are represented linearly as directions
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
mikolov2013linguistic
|
\cite{mikolov2013linguistic}
|
Linguistic Regularities in Continuous Space Word Representations
| null | null | true | false |
Mikolov, Tomas and
Yih, Wen-tau and
Zweig, Geoffrey
| 2,013 | null |
https://aclanthology.org/N13-1090
| null | null |
Linguistic Regularities in Continuous Space Word Representations
|
arXiv:1806.07978v1 [cs.LG] 20 Jun 2018
|
https://arxiv.org/pdf/1806.07978
|
by T Eichinger · 2018 · Cited by 1 — Mikolov, W. Yih, and G. Zweig, “Linguistic regularities in continuous space word representations.” in HLT-NAACL, 2013, pp. 746–
|
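The regularities described above are usually demonstrated with vector arithmetic such as king - man + woman ≈ queen. A self-contained toy example with planted offsets (not real trained embeddings) is sketched below.

```python
import numpy as np

rng = np.random.default_rng(11)
d = 50
vocab = ["king", "queen", "man", "woman", "paris", "france", "rome", "italy"]
emb = {w: 0.3 * rng.normal(size=d) for w in vocab}         # toy embeddings
# Plant the offsets the paper describes so the analogies are recoverable.
royalty, female, capital_of = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)
emb["king"] += royalty
emb["queen"] += royalty + female
emb["woman"] += female
emb["paris"] += capital_of + emb["france"]
emb["rome"] += capital_of + emb["italy"]

def analogy(a, b, c, emb):
    """Answer 'a is to b as c is to ?' via argmax_w cos(emb[w], emb[b] - emb[a] + emb[c])."""
    target = emb[b] - emb[a] + emb[c]
    target = target / np.linalg.norm(target)
    scores = {w: (v / np.linalg.norm(v)) @ target
              for w, v in emb.items() if w not in (a, b, c)}
    return max(scores, key=scores.get)

print(analogy("man", "woman", "king", emb))     # -> "queen"
print(analogy("paris", "france", "rome", emb))  # -> "italy"
```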
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
nanda2023emergentlinearrepresentationsworld
|
\cite{nanda2023emergentlinearrepresentationsworld}
|
Emergent Linear Representations in World Models of Self-Supervised
Sequence Models
|
http://arxiv.org/abs/2309.00941v2
|
How do sequence models represent their decision-making process? Prior work
suggests that Othello-playing neural network learned nonlinear models of the
board state (Li et al., 2023). In this work, we provide evidence of a closely
related linear representation of the board. In particular, we show that probing
for "my colour" vs. "opponent's colour" may be a simple yet powerful way to
interpret the model's internal state. This precise understanding of the
internal representations allows us to control the model's behaviour with simple
vector arithmetic. Linear representations enable significant interpretability
progress, which we demonstrate with further exploration of how the world model
is computed.
| true | true |
Nanda, Neel and
Lee, Andrew and
Wattenberg, Martin
| 2,023 | null |
https://aclanthology.org/2023.blackboxnlp-1.2
|
10.18653/v1/2023.blackboxnlp-1.2
| null |
Emergent Linear Representations in World Models of Self-Supervised
Sequence Models
|
Emergent Linear Representations in World Models of Self- ...
|
https://huggingface.co/papers/2309.00941
|
Sequence models use linear representations to interpret their decision-making processes in games like Othello, allowing for control of model
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
hernandez2021lowdimensionallineargeometrycontextualized
|
\cite{hernandez2021lowdimensionallineargeometrycontextualized}
|
The Low-Dimensional Linear Geometry of Contextualized Word
Representations
|
http://arxiv.org/abs/2105.07109v2
|
Black-box probing models can reliably extract linguistic features like tense,
number, and syntactic role from pretrained word representations. However, the
manner in which these features are encoded in representations remains poorly
understood. We present a systematic study of the linear geometry of
contextualized word representations in ELMO and BERT. We show that a variety of
linguistic features (including structured dependency relationships) are encoded
in low-dimensional subspaces. We then refine this geometric picture, showing
that there are hierarchical relations between the subspaces encoding general
linguistic categories and more specific ones, and that low-dimensional feature
encodings are distributed rather than aligned to individual neurons. Finally,
we demonstrate that these linear subspaces are causally related to model
behavior, and can be used to perform fine-grained manipulation of BERT's output
distribution.
| true | true |
Hernandez, Evan and
Andreas, Jacob
| 2,021 | null |
https://aclanthology.org/2021.conll-1.7
|
10.18653/v1/2021.conll-1.7
| null |
The Low-Dimensional Linear Geometry of Contextualized Word
Representations
|
Evan Hernandez - Google Scholar
|
https://scholar.google.com/citations?user=38EC20cAAAAJ&hl=en
|
The low-dimensional linear geometry of contextualized word representations. E Hernandez, J Andreas. arXiv preprint arXiv:2105.07109, 2021. 50, 2021. A
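The probing methodology this entry refers to, checking whether a linguistic feature is linearly decodable from contextual representations, reduces to fitting a linear classifier on hidden states. Below is a minimal sketch in which synthetic "representations" stand in for ELMo/BERT activations; the data, labels, and dimensions are fabricated for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for contextualized word vectors: a binary feature
# (e.g. plural vs. singular) is encoded along one direction plus noise.
n, d = 2000, 64
labels = rng.integers(0, 2, size=n)
direction = rng.normal(size=d)
reps = rng.normal(size=(n, d)) + np.outer(labels - 0.5, direction) * 2.0

X_tr, X_te, y_tr, y_te = train_test_split(reps, labels, test_size=0.25, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probe accuracy:", probe.score(X_te, y_te))  # high accuracy => linearly decodable
```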
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
bricken2023monosemanticity
|
\cite{bricken2023monosemanticity}
|
Towards Monosemanticity: Decomposing Language Models With Dictionary Learning
| null | null | true | false |
Bricken, Trenton and Templeton, Adly and Batson, Joshua and Chen, Brian and Jermyn, Adam and Conerly, Tom and Turner, Nick and Anil, Cem and Denison, Carson and Askell, Amanda and Lasenby, Robert and Wu, Yifan and Kravec, Shauna and Schiefer, Nicholas and Maxwell, Tim and Joseph, Nicholas and Hatfield-Dodds, Zac and Tamkin, Alex and Nguyen, Karina and McLean, Brayden and Burke, Josiah E and Hume, Tristan and Carter, Shan and Henighan, Tom and Olah, Christopher
| 2,023 | null | null | null |
Transformer Circuits Thread
|
Towards Monosemanticity: Decomposing Language Models With Dictionary Learning
|
Decomposing Language Models With Dictionary Learning
|
https://www.anthropic.com/research/towards-monosemanticity-decomposing-language-models-with-dictionary-learning
|
In our latest paper, Towards Monosemanticity: Decomposing Language Models With Dictionary Learning, we outline evidence that there are better units of analysis
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
templeton2024scaling
|
\cite{templeton2024scaling}
|
Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet
| null | null | true | false |
Templeton, Adly and Conerly, Tom and Marcus, Jonathan and Lindsey, Jack and Bricken, Trenton and Chen, Brian and Pearce, Adam and Citro, Craig and Ameisen, Emmanuel and Jones, Andy and Cunningham, Hoagy and Turner, Nicholas L and McDougall, Callum and MacDiarmid, Monte and Freeman, C. Daniel and Sumers, Theodore R. and Rees, Edward and Batson, Joshua and Jermyn, Adam and Carter, Shan and Olah, Chris and Henighan, Tom
| 2,024 | null |
https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html
| null |
Transformer Circuits Thread
|
Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet
|
arXiv:2406.17969v2 [cs.CL] 15 Oct 2024
|
https://arxiv.org/pdf/2406.17969
|
by H Yan · 2024 · Cited by 8 — Scaling monosemanticity: Extracting interpretable features from claude 3 sonnet. Transformer Circuits Thread. Hugo Touvron, Thibaut Lavril
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
cunningham2023sparseautoencodershighlyinterpretable
|
\cite{cunningham2023sparseautoencodershighlyinterpretable}
|
Sparse Autoencoders Find Highly Interpretable Features in Language
Models
|
http://arxiv.org/abs/2309.08600v3
|
One of the roadblocks to a better understanding of neural networks' internals
is \textit{polysemanticity}, where neurons appear to activate in multiple,
semantically distinct contexts. Polysemanticity prevents us from identifying
concise, human-understandable explanations for what neural networks are doing
internally. One hypothesised cause of polysemanticity is
\textit{superposition}, where neural networks represent more features than they
have neurons by assigning features to an overcomplete set of directions in
activation space, rather than to individual neurons. Here, we attempt to
identify those directions, using sparse autoencoders to reconstruct the
internal activations of a language model. These autoencoders learn sets of
sparsely activating features that are more interpretable and monosemantic than
directions identified by alternative approaches, where interpretability is
measured by automated methods. Moreover, we show that with our learned set of
features, we can pinpoint the features that are causally responsible for
counterfactual behaviour on the indirect object identification task
\citep{wang2022interpretability} to a finer degree than previous
decompositions. This work indicates that it is possible to resolve
superposition in language models using a scalable, unsupervised method. Our
method may serve as a foundation for future mechanistic interpretability work,
which we hope will enable greater model transparency and steerability.
| true | true |
Robert Huben and
Hoagy Cunningham and
Logan Riggs and
Aidan Ewart and
Lee Sharkey
| 2,024 | null |
https://openreview.net/forum?id=F76bwRSLeK
| null | null |
Sparse Autoencoders Find Highly Interpretable Features in Language
Models
|
Sparse Autoencoders Find Highly Interpretable Features in ...
|
https://openreview.net/forum?id=F76bwRSLeK
|
This paper proposes using sparse autoencoders to learn interpretable and monosemantic features from the internal activations of language models. This paper presents a way to make the individual features of Large Language Models more interpretable by learning simple autoencoders with activation sparsity. On the originality of the approach, while we agree that none of the individual elements is novel on its own, the pipeline of using a sparse autoencoder to decompose activations in a large model (section 2), which are then passed to an automatic interpretation protocol (section 3), and then analysed in terms of the circuits that build up later features (section 5) represents a meaningful step in our ability to peer into the inner workings of language models.
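For readers unfamiliar with the dictionary-learning setup described in this entry: a sparse autoencoder over activations is typically an overcomplete encoder/decoder pair trained with a reconstruction loss plus an L1 sparsity penalty on the feature activations. The sketch below trains one on random vectors purely to show the moving parts; the dimensions, penalty weight, and training data are illustrative assumptions rather than the cited paper's configuration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

d_model, d_dict, l1_coeff = 64, 256, 1e-3  # overcomplete dictionary (4x expansion)

class SparseAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model)

    def forward(self, x):
        feats = torch.relu(self.encoder(x))   # non-negative feature activations
        return self.decoder(feats), feats

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
acts = torch.randn(4096, d_model)             # stand-in for cached LM activations

for step in range(200):
    batch = acts[torch.randint(0, acts.shape[0], (256,))]
    recon, feats = sae(batch)
    loss = (recon - batch).pow(2).mean() + l1_coeff * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final loss:", loss.item())
```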
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
pearce2024bilinearmlpsenableweightbased
|
\cite{pearce2024bilinearmlpsenableweightbased}
|
Bilinear MLPs enable weight-based mechanistic interpretability
|
http://arxiv.org/abs/2410.08417v2
|
A mechanistic understanding of how MLPs do computation in deep neural
networks remains elusive. Current interpretability work can extract features
from hidden activations over an input dataset but generally cannot explain how
MLP weights construct features. One challenge is that element-wise
nonlinearities introduce higher-order interactions and make it difficult to
trace computations through the MLP layer. In this paper, we analyze bilinear
MLPs, a type of Gated Linear Unit (GLU) without any element-wise nonlinearity
that nevertheless achieves competitive performance. Bilinear MLPs can be fully
expressed in terms of linear operations using a third-order tensor, allowing
flexible analysis of the weights. Analyzing the spectra of bilinear MLP weights
using eigendecomposition reveals interpretable low-rank structure across toy
tasks, image classification, and language modeling. We use this understanding
to craft adversarial examples, uncover overfitting, and identify small language
model circuits directly from the weights alone. Our results demonstrate that
bilinear layers serve as an interpretable drop-in replacement for current
activation functions and that weight-based interpretability is viable for
understanding deep-learning models.
| true | true |
Michael T. Pearce and Thomas Dooms and Alice Rigg and Jose M. Oramas and Lee Sharkey
| 2,024 | null |
https://arxiv.org/abs/2410.08417
| null |
ArXiv preprint
|
Bilinear MLPs enable weight-based mechanistic interpretability
|
Bilinear MLPs enable weight-based mechanistic ...
|
https://openreview.net/forum?id=gI0kPklUKS
|
by MT Pearce · Cited by 2 — The close-to-linear structure of bilinear MLPs enables weight-based analysis that reveals interpretable low rank structure across multiple modalities.
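The bilinear MLP analyzed in this entry is a GLU-style layer with the element-wise nonlinearity removed: the hidden activation is an element-wise product of two linear projections. A minimal PyTorch version is sketched below; the layer sizes are arbitrary and biases are omitted to keep the map purely bilinear in the input.

```python
import torch
import torch.nn as nn

class BilinearMLP(nn.Module):
    """GLU variant without an element-wise nonlinearity: h = (W x) * (V x), y = P h."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.w = nn.Linear(d_model, d_hidden, bias=False)
        self.v = nn.Linear(d_model, d_hidden, bias=False)
        self.proj = nn.Linear(d_hidden, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.w(x) * self.v(x))  # bilinear in x, so expressible as a 3rd-order tensor

layer = BilinearMLP(d_model=32, d_hidden=128)
print(layer(torch.randn(4, 32)).shape)  # torch.Size([4, 32])
```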
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
elhage2021mathematical
|
\cite{elhage2021mathematical}
|
A Mathematical Framework for Transformer Circuits
| null | null | true | false |
Elhage, Nelson and Nanda, Neel and Olsson, Catherine and Henighan, Tom and Joseph, Nicholas and Mann, Ben and Askell, Amanda and Bai, Yuntao and Chen, Anna and Conerly, Tom and DasSarma, Nova and Drain, Dawn and Ganguli, Deep and Hatfield-Dodds, Zac and Hernandez, Danny and Jones, Andy and Kernion, Jackson and Lovitt, Liane and Ndousse, Kamal and Amodei, Dario and Brown, Tom and Clark, Jack and Kaplan, Jared and McCandlish, Sam and Olah, Chris
| 2,021 | null | null | null |
Transformer Circuits Thread
|
A Mathematical Framework for Transformer Circuits
|
A Walkthrough of A Mathematical Framework for ...
|
https://www.neelnanda.io/mechanistic-interpretability/a-walkthrough-of-a-mathematical-framework-for-transformer-circuits
|
A Mathematical Framework for Transformer Circuits is, in my opinion, the coolest paper I've ever had the privilege of working on.
|
COSMIC: Generalized Refusal Direction Identification in LLM Activations
|
2506.00085v1
|
lieberum2023doescircuitanalysisinterpretability
|
\cite{lieberum2023doescircuitanalysisinterpretability}
|
Does Circuit Analysis Interpretability Scale? Evidence from Multiple
Choice Capabilities in Chinchilla
|
http://arxiv.org/abs/2307.09458v3
|
\emph{Circuit analysis} is a promising technique for understanding the
internal mechanisms of language models. However, existing analyses are done in
small models far from the state of the art. To address this, we present a case
study of circuit analysis in the 70B Chinchilla model, aiming to test the
scalability of circuit analysis. In particular, we study multiple-choice
question answering, and investigate Chinchilla's capability to identify the
correct answer \emph{label} given knowledge of the correct answer \emph{text}.
We find that the existing techniques of logit attribution, attention pattern
visualization, and activation patching naturally scale to Chinchilla, allowing
us to identify and categorize a small set of `output nodes' (attention heads
and MLPs).
We further study the `correct letter' category of attention heads aiming to
understand the semantics of their features, with mixed results. For normal
multiple-choice question answers, we significantly compress the query, key and
value subspaces of the head without loss of performance when operating on the
answer labels for multiple-choice questions, and we show that the query and key
subspaces represent an `Nth item in an enumeration' feature to at least some
extent. However, when we attempt to use this explanation to understand the
heads' behaviour on a more general distribution including randomized answer
labels, we find that it is only a partial explanation, suggesting there is more
to learn about the operation of `correct letter' heads on multiple choice
question answering.
| true | true |
Tom Lieberum and Matthew Rahtz and János Kramár and Neel Nanda and Geoffrey Irving and Rohin Shah and Vladimir Mikulik
| 2,023 | null |
https://arxiv.org/abs/2307.09458
| null |
ArXiv preprint
|
Does Circuit Analysis Interpretability Scale? Evidence from Multiple
Choice Capabilities in Chinchilla
|
Does Circuit Analysis Interpretability Scale? Evidence from Multiple ...
|
https://arxiv.org/abs/2307.09458
|
Missing: 04/08/2025
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
liang2022holistic
|
\cite{liang2022holistic}
|
Holistic Evaluation of Language Models
|
http://arxiv.org/abs/2211.09110v2
|
Language models (LMs) are becoming the foundation for almost all major
language technologies, but their capabilities, limitations, and risks are not
well understood. We present Holistic Evaluation of Language Models (HELM) to
improve the transparency of language models. First, we taxonomize the vast
space of potential scenarios (i.e. use cases) and metrics (i.e. desiderata)
that are of interest for LMs. Then we select a broad subset based on coverage
and feasibility, noting what's missing or underrepresented (e.g. question
answering for neglected English dialects, metrics for trustworthiness). Second,
we adopt a multi-metric approach: We measure 7 metrics (accuracy, calibration,
robustness, fairness, bias, toxicity, and efficiency) for each of 16 core
scenarios when possible (87.5% of the time). This ensures metrics beyond
accuracy don't fall to the wayside, and that trade-offs are clearly exposed. We
also perform 7 targeted evaluations, based on 26 targeted scenarios, to analyze
specific aspects (e.g. reasoning, disinformation). Third, we conduct a
large-scale evaluation of 30 prominent language models (spanning open,
limited-access, and closed models) on all 42 scenarios, 21 of which were not
previously used in mainstream LM evaluation. Prior to HELM, models on average
were evaluated on just 17.9% of the core HELM scenarios, with some prominent
models not sharing a single scenario in common. We improve this to 96.0%: now
all 30 models have been densely benchmarked on the same core scenarios and
metrics under standardized conditions. Our evaluation surfaces 25 top-level
findings. For full transparency, we release all raw model prompts and
completions publicly for further analysis, as well as a general modular
toolkit. We intend for HELM to be a living benchmark for the community,
continuously updated with new scenarios, metrics, and models.
| true | true |
Liang, Percy and Bommasani, Rishi and Lee, Tony and Tsipras, Dimitris and Soylu, Dilara and Yasunaga, Michihiro and Zhang, Yian and Narayanan, Deepak and Wu, Yuhuai and Kumar, Ananya and others
| 2,022 | null | null | null |
arXiv preprint arXiv:2211.09110
|
Holistic Evaluation of Language Models
|
Holistic Evaluation of Language Models
|
http://arxiv.org/pdf/2211.09110v2
|
Language models (LMs) are becoming the foundation for almost all major
language technologies, but their capabilities, limitations, and risks are not
well understood. We present Holistic Evaluation of Language Models (HELM) to
improve the transparency of language models. First, we taxonomize the vast
space of potential scenarios (i.e. use cases) and metrics (i.e. desiderata)
that are of interest for LMs. Then we select a broad subset based on coverage
and feasibility, noting what's missing or underrepresented (e.g. question
answering for neglected English dialects, metrics for trustworthiness). Second,
we adopt a multi-metric approach: We measure 7 metrics (accuracy, calibration,
robustness, fairness, bias, toxicity, and efficiency) for each of 16 core
scenarios when possible (87.5% of the time). This ensures metrics beyond
accuracy don't fall to the wayside, and that trade-offs are clearly exposed. We
also perform 7 targeted evaluations, based on 26 targeted scenarios, to analyze
specific aspects (e.g. reasoning, disinformation). Third, we conduct a
large-scale evaluation of 30 prominent language models (spanning open,
limited-access, and closed models) on all 42 scenarios, 21 of which were not
previously used in mainstream LM evaluation. Prior to HELM, models on average
were evaluated on just 17.9% of the core HELM scenarios, with some prominent
models not sharing a single scenario in common. We improve this to 96.0%: now
all 30 models have been densely benchmarked on the same core scenarios and
metrics under standardized conditions. Our evaluation surfaces 25 top-level
findings. For full transparency, we release all raw model prompts and
completions publicly for further analysis, as well as a general modular
toolkit. We intend for HELM to be a living benchmark for the community,
continuously updated with new scenarios, metrics, and models.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
hendrycks2020measuring
|
\cite{hendrycks2020measuring}
|
Measuring Massive Multitask Language Understanding
|
http://arxiv.org/abs/2009.03300v3
|
We propose a new test to measure a text model's multitask accuracy. The test
covers 57 tasks including elementary mathematics, US history, computer science,
law, and more. To attain high accuracy on this test, models must possess
extensive world knowledge and problem solving ability. We find that while most
recent models have near random-chance accuracy, the very largest GPT-3 model
improves over random chance by almost 20 percentage points on average. However,
on every one of the 57 tasks, the best models still need substantial
improvements before they can reach expert-level accuracy. Models also have
lopsided performance and frequently do not know when they are wrong. Worse,
they still have near-random accuracy on some socially important subjects such
as morality and law. By comprehensively evaluating the breadth and depth of a
model's academic and professional understanding, our test can be used to
analyze models across many tasks and to identify important shortcomings.
| true | true |
Hendrycks, Dan and Burns, Collin and Basart, Steven and Zou, Andy and Mazeika, Mantas and Song, Dawn and Steinhardt, Jacob
| 2,021 | null | null | null | null |
Measuring Massive Multitask Language Understanding
|
Measuring Massive Multitask Language Understanding
|
http://arxiv.org/pdf/2009.03300v3
|
We propose a new test to measure a text model's multitask accuracy. The test
covers 57 tasks including elementary mathematics, US history, computer science,
law, and more. To attain high accuracy on this test, models must possess
extensive world knowledge and problem solving ability. We find that while most
recent models have near random-chance accuracy, the very largest GPT-3 model
improves over random chance by almost 20 percentage points on average. However,
on every one of the 57 tasks, the best models still need substantial
improvements before they can reach expert-level accuracy. Models also have
lopsided performance and frequently do not know when they are wrong. Worse,
they still have near-random accuracy on some socially important subjects such
as morality and law. By comprehensively evaluating the breadth and depth of a
model's academic and professional understanding, our test can be used to
analyze models across many tasks and to identify important shortcomings.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
open-llm-leaderboard-v2
|
\cite{open-llm-leaderboard-v2}
|
Open LLM Leaderboard v2
| null | null | true | false |
Clémentine Fourrier and Nathan Habib and Alina Lozovskaya and Konrad Szafer and Thomas Wolf
| 2,024 | null | null | null | null |
Open LLM Leaderboard v2
|
Hugging Face Upgrades Open LLM Leaderboard v2 for ... - InfoQ
|
https://www.infoq.com/news/2024/10/open-llm-leaderboard-v2-launch/
|
Hugging Face has recently released Open LLM Leaderboard v2, an upgraded version of their popular benchmarking platform for large language models. InfoQ spoke to Alina Lozovskaia, one of the Leaderboard maintainers at Hugging Face, to learn more about the motivation behind this update and its implications for the AI community.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
blodgett-etal-2020-language
|
\cite{blodgett-etal-2020-language}
|
Language (Technology) is Power: A Critical Survey of "Bias" in NLP
|
http://arxiv.org/abs/2005.14050v2
|
We survey 146 papers analyzing "bias" in NLP systems, finding that their
motivations are often vague, inconsistent, and lacking in normative reasoning,
despite the fact that analyzing "bias" is an inherently normative process. We
further find that these papers' proposed quantitative techniques for measuring
or mitigating "bias" are poorly matched to their motivations and do not engage
with the relevant literature outside of NLP. Based on these findings, we
describe the beginnings of a path forward by proposing three recommendations
that should guide work analyzing "bias" in NLP systems. These recommendations
rest on a greater recognition of the relationships between language and social
hierarchies, encouraging researchers and practitioners to articulate their
conceptualizations of "bias"---i.e., what kinds of system behaviors are
harmful, in what ways, to whom, and why, as well as the normative reasoning
underlying these statements---and to center work around the lived experiences
of members of communities affected by NLP systems, while interrogating and
reimagining the power relations between technologists and such communities.
| true | true |
Blodgett, Su Lin and
Barocas, Solon and
Daum{\'e} III, Hal and
Wallach, Hanna
| 2,020 | null | null | null | null |
Language (Technology) is Power: A Critical Survey of "Bias" in NLP
|
Language (Technology) is Power: A Critical Survey of "Bias" in NLP
|
http://arxiv.org/pdf/2005.14050v2
|
We survey 146 papers analyzing "bias" in NLP systems, finding that their
motivations are often vague, inconsistent, and lacking in normative reasoning,
despite the fact that analyzing "bias" is an inherently normative process. We
further find that these papers' proposed quantitative techniques for measuring
or mitigating "bias" are poorly matched to their motivations and do not engage
with the relevant literature outside of NLP. Based on these findings, we
describe the beginnings of a path forward by proposing three recommendations
that should guide work analyzing "bias" in NLP systems. These recommendations
rest on a greater recognition of the relationships between language and social
hierarchies, encouraging researchers and practitioners to articulate their
conceptualizations of "bias"---i.e., what kinds of system behaviors are
harmful, in what ways, to whom, and why, as well as the normative reasoning
underlying these statements---and to center work around the lived experiences
of members of communities affected by NLP systems, while interrogating and
reimagining the power relations between technologists and such communities.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
yang2024assessing
|
\cite{yang2024assessing}
|
Assessing Adversarial Robustness of Large Language Models: An Empirical
Study
|
http://arxiv.org/abs/2405.02764v2
|
Large Language Models (LLMs) have revolutionized natural language processing,
but their robustness against adversarial attacks remains a critical concern. We
presents a novel white-box style attack approach that exposes vulnerabilities
in leading open-source LLMs, including Llama, OPT, and T5. We assess the impact
of model size, structure, and fine-tuning strategies on their resistance to
adversarial perturbations. Our comprehensive evaluation across five diverse
text classification tasks establishes a new benchmark for LLM robustness. The
findings of this study have far-reaching implications for the reliable
deployment of LLMs in real-world applications and contribute to the advancement
of trustworthy AI systems.
| true | true |
Yang, Zeyu and Meng, Zhao and Zheng, Xiaochen and Wattenhofer, Roger
| 2,024 | null | null | null | null |
Assessing Adversarial Robustness of Large Language Models: An Empirical
Study
|
[PDF] Assessing Adversarial Robustness of Large Language Models
|
https://genai-evaluation-kdd2024.github.io/genai-evalution-kdd2024/assets/papers/GenAI_Evaluation_KDD2024_paper_24.pdf
|
In this paper, we present an extensive study of three leading open- source LLMs: Llama, OPT, and T5. We evaluate the robustness of various sizes
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
hartvigsen2022toxigen
|
\cite{hartvigsen2022toxigen}
|
ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and
Implicit Hate Speech Detection
|
http://arxiv.org/abs/2203.09509v4
|
Toxic language detection systems often falsely flag text that contains
minority group mentions as toxic, as those groups are often the targets of
online hate. Such over-reliance on spurious correlations also causes systems to
struggle with detecting implicitly toxic language. To help mitigate these
issues, we create ToxiGen, a new large-scale and machine-generated dataset of
274k toxic and benign statements about 13 minority groups. We develop a
demonstration-based prompting framework and an adversarial
classifier-in-the-loop decoding method to generate subtly toxic and benign text
with a massive pretrained language model. Controlling machine generation in
this way allows ToxiGen to cover implicitly toxic text at a larger scale, and
about more demographic groups, than previous resources of human-written text.
We conduct a human evaluation on a challenging subset of ToxiGen and find that
annotators struggle to distinguish machine-generated text from human-written
language. We also find that 94.5% of toxic examples are labeled as hate speech
by human annotators. Using three publicly-available datasets, we show that
finetuning a toxicity classifier on our data improves its performance on
human-written data substantially. We also demonstrate that ToxiGen can be used
to fight machine-generated toxicity as finetuning improves the classifier
significantly on our evaluation subset. Our code and data can be found at
https://github.com/microsoft/ToxiGen.
| true | true |
Hartvigsen, Thomas and Gabriel, Saadia and Palangi, Hamid and Sap, Maarten and Ray, Dipankar and Kamar, Ece
| 2,022 | null | null | null | null |
ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and
Implicit Hate Speech Detection
|
ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial ...
|
https://www.researchgate.net/publication/361059047_ToxiGen_A_Large-Scale_Machine-Generated_Dataset_for_Adversarial_and_Implicit_Hate_Speech_Detection
|
Toxigen is a large-scale dataset featuring over 270K machine-generated toxic and benign statements about 13 minority groups, specifically designed to expose
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
magooda2023framework
|
\cite{magooda2023framework}
|
A Framework for Automated Measurement of Responsible AI Harms in
Generative AI Applications
|
http://arxiv.org/abs/2310.17750v1
|
We present a framework for the automated measurement of responsible AI (RAI)
metrics for large language models (LLMs) and associated products and services.
Our framework for automatically measuring harms from LLMs builds on existing
technical and sociotechnical expertise and leverages the capabilities of
state-of-the-art LLMs, such as GPT-4. We use this framework to run through
several case studies investigating how different LLMs may violate a range of
RAI-related principles. The framework may be employed alongside domain-specific
sociotechnical expertise to create measurements for new harm areas in the
future. By implementing this framework, we aim to enable more advanced harm
measurement efforts and further the responsible use of LLMs.
| true | true |
Magooda, Ahmed and Helyar, Alec and Jackson, Kyle and Sullivan, David and Atalla, Chad and Sheng, Emily and Vann, Dan and Edgar, Richard and Palangi, Hamid and Lutz, Roman and others
| 2,023 | null | null | null |
arXiv preprint arXiv:2310.17750
|
A Framework for Automated Measurement of Responsible AI Harms in
Generative AI Applications
|
A Framework for Automated Measurement of Responsible ...
|
https://www.microsoft.com/en-us/research/publication/a-framework-for-automated-measurement-of-responsible-ai-harms-in-generative-ai-applications/?locale=zh-cn
|
We present a framework for the automated measurement of responsible AI (RAI) metrics for large language models (LLMs) and associated products
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
li2023survey
|
\cite{li2023survey}
|
A Survey on Fairness in Large Language Models
|
http://arxiv.org/abs/2308.10149v2
|
Large Language Models (LLMs) have shown powerful performance and development
prospects and are widely deployed in the real world. However, LLMs can capture
social biases from unprocessed training data and propagate the biases to
downstream tasks. Unfair LLM systems have undesirable social impacts and
potential harms. In this paper, we provide a comprehensive review of related
research on fairness in LLMs. Considering the influence of parameter magnitude
and training paradigm on research strategy, we divide existing fairness
research into oriented to medium-sized LLMs under pre-training and fine-tuning
paradigms and oriented to large-sized LLMs under prompting paradigms. First,
for medium-sized LLMs, we introduce evaluation metrics and debiasing methods
from the perspectives of intrinsic bias and extrinsic bias, respectively. Then,
for large-sized LLMs, we introduce recent fairness research, including fairness
evaluation, reasons for bias, and debiasing methods. Finally, we discuss and
provide insight on the challenges and future directions for the development of
fairness in LLMs.
| true | true |
Li, Yingji and Du, Mengnan and Song, Rui and Wang, Xin and Wang, Ying
| 2,023 | null | null | null |
arXiv preprint arXiv:2308.10149
|
A Survey on Fairness in Large Language Models
|
A Survey on Fairness in Large Language Models
|
http://arxiv.org/pdf/2308.10149v2
|
Large Language Models (LLMs) have shown powerful performance and development
prospects and are widely deployed in the real world. However, LLMs can capture
social biases from unprocessed training data and propagate the biases to
downstream tasks. Unfair LLM systems have undesirable social impacts and
potential harms. In this paper, we provide a comprehensive review of related
research on fairness in LLMs. Considering the influence of parameter magnitude
and training paradigm on research strategy, we divide existing fairness
research into oriented to medium-sized LLMs under pre-training and fine-tuning
paradigms and oriented to large-sized LLMs under prompting paradigms. First,
for medium-sized LLMs, we introduce evaluation metrics and debiasing methods
from the perspectives of intrinsic bias and extrinsic bias, respectively. Then,
for large-sized LLMs, we introduce recent fairness research, including fairness
evaluation, reasons for bias, and debiasing methods. Finally, we discuss and
provide insight on the challenges and future directions for the development of
fairness in LLMs.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
mackraz2024evaluating
|
\cite{mackraz2024evaluating}
|
Evaluating Gender Bias Transfer between Pre-trained and Prompt-Adapted
Language Models
|
http://arxiv.org/abs/2412.03537v1
|
Large language models (LLMs) are increasingly being adapted to achieve
task-specificity for deployment in real-world decision systems. Several
previous works have investigated the bias transfer hypothesis (BTH) by studying
the effect of the fine-tuning adaptation strategy on model fairness to find
that fairness in pre-trained masked language models have limited effect on the
fairness of models when adapted using fine-tuning. In this work, we expand the
study of BTH to causal models under prompt adaptations, as prompting is an
accessible, and compute-efficient way to deploy models in real-world systems.
In contrast to previous works, we establish that intrinsic biases in
pre-trained Mistral, Falcon and Llama models are strongly correlated (rho >=
0.94) with biases when the same models are zero- and few-shot prompted, using a
pronoun co-reference resolution task. Further, we find that bias transfer
remains strongly correlated even when LLMs are specifically prompted to exhibit
fair or biased behavior (rho >= 0.92), and few-shot length and stereotypical
composition are varied (rho >= 0.97). Our findings highlight the importance of
ensuring fairness in pre-trained LLMs, especially when they are later used to
perform downstream tasks via prompt adaptation.
| true | true |
Mackraz, Natalie and Sivakumar, Nivedha and Khorshidi, Samira and Patel, Krishna and Theobald, Barry-John and Zappella, Luca and Apostoloff, Nicholas
| 2,024 | null | null | null |
arXiv preprint arXiv:2412.03537
|
Evaluating Gender Bias Transfer between Pre-trained and Prompt-Adapted
Language Models
|
Evaluating Gender Bias Transfer between Pre-trained and Prompt ...
|
https://openreview.net/forum?id=HyN9POiYhN
|
The primary purpose of this research is to understand if intrinsic bias in pre-trained models can transfer to downstream tasks upon prompting, to gain
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
patel2024fairness
|
\cite{patel2024fairness}
|
Fairness Dynamics During Training
|
http://arxiv.org/abs/2506.01709v1
|
We investigate fairness dynamics during Large Language Model (LLM) training
to enable the diagnoses of biases and mitigations through training
interventions like early stopping; we find that biases can emerge suddenly and
do not always follow common performance metrics. We introduce two new metrics
to evaluate fairness dynamics holistically during model pre-training: Average
Rank and Jensen-Shannon Divergence by Parts. These metrics provide insights
into the Pythia models' progression of biases in gender prediction of
occupations on the WinoBias dataset. By monitoring these dynamics, we find that
(1) Pythia-6.9b is biased towards men; it becomes more performant and confident
predicting "male" than "female" during training, (2) via early-stopping,
Pythia-6.9b can exchange 1.7% accuracy on LAMBADA for a 92.5% increase in
fairness, and (3) larger models can exhibit more bias; Pythia-6.9b makes more
assumptions about gender than Pythia-160m, even when a subject's gender is not
specified.
| true | true |
Patel, Krishna and Sivakumar, Nivedha and Theobald, Barry-John and Zappella, Luca and Apostoloff, Nicholas
| null | null | null | null |
NeurIPS Evaluating Evaluations: Examining Best Practices for Measuring Broader Impacts of Generative AI Workshop 2024
|
Fairness Dynamics During Training
|
Fairness Dynamics During Training
|
http://arxiv.org/pdf/2506.01709v1
|
We investigate fairness dynamics during Large Language Model (LLM) training
to enable the diagnoses of biases and mitigations through training
interventions like early stopping; we find that biases can emerge suddenly and
do not always follow common performance metrics. We introduce two new metrics
to evaluate fairness dynamics holistically during model pre-training: Average
Rank and Jensen-Shannon Divergence by Parts. These metrics provide insights
into the Pythia models' progression of biases in gender prediction of
occupations on the WinoBias dataset. By monitoring these dynamics, we find that
(1) Pythia-6.9b is biased towards men; it becomes more performant and confident
predicting "male" than "female" during training, (2) via early-stopping,
Pythia-6.9b can exchange 1.7% accuracy on LAMBADA for a 92.5% increase in
fairness, and (3) larger models can exhibit more bias; Pythia-6.9b makes more
assumptions about gender than Pythia-160m, even when a subject's gender is not
specified.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
laskar2023systematic
|
\cite{laskar2023systematic}
|
A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark
Datasets
|
http://arxiv.org/abs/2305.18486v4
|
The development of large language models (LLMs) such as ChatGPT has brought a
lot of attention recently. However, their evaluation in the benchmark academic
datasets remains under-explored due to the difficulty of evaluating the
generative outputs produced by this model against the ground truth. In this
paper, we aim to present a thorough evaluation of ChatGPT's performance on
diverse academic datasets, covering tasks like question-answering, text
summarization, code generation, commonsense reasoning, mathematical
problem-solving, machine translation, bias detection, and ethical
considerations. Specifically, we evaluate ChatGPT across 140 tasks and analyze
255K responses it generates in these datasets. This makes our work the largest
evaluation of ChatGPT in NLP benchmarks. In short, our study aims to validate
the strengths and weaknesses of ChatGPT in various tasks and provide insights
for future research using LLMs. We also report a new emergent ability to follow
multi-query instructions that we mostly found in ChatGPT and other
instruction-tuned models. Our extensive evaluation shows that even though
ChatGPT is capable of performing a wide variety of tasks, and may obtain
impressive performance in several benchmark datasets, it is still far from
achieving the ability to reliably solve many challenging tasks. By providing a
thorough assessment of ChatGPT's performance across diverse NLP tasks, this
paper sets the stage for a targeted deployment of ChatGPT-like LLMs in
real-world applications.
| true | true |
Laskar, Md Tahmid Rahman and Bari, M Saiful and Rahman, Mizanur and Bhuiyan, Md Amran Hossen and Joty, Shafiq and Huang, Jimmy Xiangji
| 2,023 | null | null | null | null |
A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark
Datasets
|
A Systematic Study and Comprehensive Evaluation of ChatGPT on ...
|
https://arxiv.org/abs/2305.18486
|
arXiv:2305.18486 (cs): A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets, by Md Tahmid Rahman Laskar and 5 other authors.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
chu2024fairness
|
\cite{chu2024fairness}
|
Fairness in Large Language Models: A Taxonomic Survey
|
http://arxiv.org/abs/2404.01349v2
|
Large Language Models (LLMs) have demonstrated remarkable success across
various domains. However, despite their promising performance in numerous
real-world applications, most of these algorithms lack fairness considerations.
Consequently, they may lead to discriminatory outcomes against certain
communities, particularly marginalized populations, prompting extensive study
in fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in
traditional machine learning, entails exclusive backgrounds, taxonomies, and
fulfillment techniques. To this end, this survey presents a comprehensive
overview of recent advances in the existing literature concerning fair LLMs.
Specifically, a brief introduction to LLMs is provided, followed by an analysis
of factors contributing to bias in LLMs. Additionally, the concept of fairness
in LLMs is discussed categorically, summarizing metrics for evaluating bias in
LLMs and existing algorithms for promoting fairness. Furthermore, resources for
evaluating bias in LLMs, including toolkits and datasets, are summarized.
Finally, existing research challenges and open questions are discussed.
| true | true |
Chu, Zhibo and Wang, Zichong and Zhang, Wenbin
| 2,024 | null | null | null |
ACM SIGKDD explorations newsletter
|
Fairness in Large Language Models: A Taxonomic Survey
|
Fairness in Large Language Models: A Taxonomic Survey
|
http://arxiv.org/pdf/2404.01349v2
|
Large Language Models (LLMs) have demonstrated remarkable success across
various domains. However, despite their promising performance in numerous
real-world applications, most of these algorithms lack fairness considerations.
Consequently, they may lead to discriminatory outcomes against certain
communities, particularly marginalized populations, prompting extensive study
in fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in
traditional machine learning, entails exclusive backgrounds, taxonomies, and
fulfillment techniques. To this end, this survey presents a comprehensive
overview of recent advances in the existing literature concerning fair LLMs.
Specifically, a brief introduction to LLMs is provided, followed by an analysis
of factors contributing to bias in LLMs. Additionally, the concept of fairness
in LLMs is discussed categorically, summarizing metrics for evaluating bias in
LLMs and existing algorithms for promoting fairness. Furthermore, resources for
evaluating bias in LLMs, including toolkits and datasets, are summarized.
Finally, existing research challenges and open questions are discussed.
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
wang2024ceb
|
\cite{wang2024ceb}
|
CEB: Compositional Evaluation Benchmark for Fairness in Large Language
Models
|
http://arxiv.org/abs/2407.02408v2
|
As Large Language Models (LLMs) are increasingly deployed to handle various
natural language processing (NLP) tasks, concerns regarding the potential
negative societal impacts of LLM-generated content have also arisen. To
evaluate the biases exhibited by LLMs, researchers have recently proposed a
variety of datasets. However, existing bias evaluation efforts often focus on
only a particular type of bias and employ inconsistent evaluation metrics,
leading to difficulties in comparison across different datasets and LLMs. To
address these limitations, we collect a variety of datasets designed for the
bias evaluation of LLMs, and further propose CEB, a Compositional Evaluation
Benchmark that covers different types of bias across different social groups
and tasks. The curation of CEB is based on our newly proposed compositional
taxonomy, which characterizes each dataset from three dimensions: bias types,
social groups, and tasks. By combining the three dimensions, we develop a
comprehensive evaluation strategy for the bias in LLMs. Our experiments
demonstrate that the levels of bias vary across these dimensions, thereby
providing guidance for the development of specific bias mitigation methods.
| true | true |
Wang, Song and Wang, Peng and Zhou, Tong and Dong, Yushun and Tan, Zhen and Li, Jundong
| 2,024 | null | null | null |
arXiv preprint arXiv:2407.02408
|
CEB: Compositional Evaluation Benchmark for Fairness in Large Language
Models
|
CEB: Compositional Evaluation Benchmark for Fairness in Large...
|
https://openreview.net/forum?id=IUmj2dw5se
|
Summary: This paper proposes a comprehensive benchmark for bias and fairness in large language models. The authors first propose a multi-layers taxonomy that
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
ye2024benchmarking
|
\cite{ye2024benchmarking}
|
Benchmarking LLMs via Uncertainty Quantification
|
http://arxiv.org/abs/2401.12794v3
|
The proliferation of open-source Large Language Models (LLMs) from various
institutions has highlighted the urgent need for comprehensive evaluation
methods. However, current evaluation platforms, such as the widely recognized
HuggingFace open LLM leaderboard, neglect a crucial aspect -- uncertainty,
which is vital for thoroughly assessing LLMs. To bridge this gap, we introduce
a new benchmarking approach for LLMs that integrates uncertainty
quantification. Our examination involves nine LLMs (LLM series) spanning five
representative natural language processing tasks. Our findings reveal that: I)
LLMs with higher accuracy may exhibit lower certainty; II) Larger-scale LLMs
may display greater uncertainty compared to their smaller counterparts; and
III) Instruction-finetuning tends to increase the uncertainty of LLMs. These
results underscore the significance of incorporating uncertainty in the
evaluation of LLMs.
| true | true |
Ye, Fanghua and Yang, Mingming and Pang, Jianhui and Wang, Longyue and Wong, Derek F and Yilmaz, Emine and Shi, Shuming and Tu, Zhaopeng
| 2,024 | null | null | null |
arXiv preprint arXiv:2401.12794
|
Benchmarking LLMs via Uncertainty Quantification
|
Benchmarking LLMs via Uncertainty Quantification
|
http://arxiv.org/pdf/2401.12794v3
|
The proliferation of open-source Large Language Models (LLMs) from various
institutions has highlighted the urgent need for comprehensive evaluation
methods. However, current evaluation platforms, such as the widely recognized
HuggingFace open LLM leaderboard, neglect a crucial aspect -- uncertainty,
which is vital for thoroughly assessing LLMs. To bridge this gap, we introduce
a new benchmarking approach for LLMs that integrates uncertainty
quantification. Our examination involves nine LLMs (LLM series) spanning five
representative natural language processing tasks. Our findings reveal that: I)
LLMs with higher accuracy may exhibit lower certainty; II) Larger-scale LLMs
may display greater uncertainty compared to their smaller counterparts; and
III) Instruction-finetuning tends to increase the uncertainty of LLMs. These
results underscore the significance of incorporating uncertainty in the
evaluation of LLMs.
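The uncertainty quantification discussed in this entry is, in its simplest form, a function of the model's probability distribution over answer options. The snippet below computes predictive entropy from per-option logits; the logits are made up, and this is only one basic UQ measure — the cited benchmark uses more elaborate machinery (e.g., conformal prediction-style sets) that is not reproduced here.

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    z = z - z.max()           # numerical stability
    e = np.exp(z)
    return e / e.sum()

def predictive_entropy(logits):
    """Entropy (in nats) of the distribution over answer options; higher = more uncertain."""
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum())

confident = [8.0, 0.5, 0.2, 0.1]   # probability mass concentrated on one option
uncertain = [1.1, 1.0, 0.9, 1.0]   # nearly uniform over options

print(predictive_entropy(confident))  # small
print(predictive_entropy(uncertain))  # close to ln(4) ≈ 1.386
```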
|
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for
LLMs
|
2505.23996v1
|
fabris2022algorithmic
|
\cite{fabris2022algorithmic}
|
Algorithmic Fairness Datasets: the Story so Far
|
http://arxiv.org/abs/2202.01711v4
|
Data-driven algorithms are studied in diverse domains to support critical
decisions, directly impacting people's well-being. As a result, a growing
community of researchers has been investigating the equity of existing
algorithms and proposing novel ones, advancing the understanding of risks and
opportunities of automated decision-making for historically disadvantaged
populations. Progress in fair Machine Learning hinges on data, which can be
appropriately used only if adequately documented. Unfortunately, the
algorithmic fairness community suffers from a collective data documentation
debt caused by a lack of information on specific resources (opacity) and
scatteredness of available information (sparsity). In this work, we target data
documentation debt by surveying over two hundred datasets employed in
algorithmic fairness research, and producing standardized and searchable
documentation for each of them. Moreover we rigorously identify the three most
popular fairness datasets, namely Adult, COMPAS and German Credit, for which we
compile in-depth documentation.
This unifying documentation effort supports multiple contributions. Firstly,
we summarize the merits and limitations of Adult, COMPAS and German Credit,
adding to and unifying recent scholarship, calling into question their
suitability as general-purpose fairness benchmarks. Secondly, we document and
summarize hundreds of available alternatives, annotating their domain and
supported fairness tasks, along with additional properties of interest for
fairness researchers. Finally, we analyze these datasets from the perspective
of five important data curation topics: anonymization, consent, inclusivity,
sensitive attributes, and transparency. We discuss different approaches and
levels of attention to these topics, making them tangible, and distill them
into a set of best practices for the curation of novel resources.
| true | true |
Fabris, Alessandro and Messina, Stefano and Silvello, Gianmaria and Susto, Gian Antonio
| 2,022 | null | null | null | null |
Algorithmic Fairness Datasets: the Story so Far
|
Algorithmic Fairness Datasets: the Story so Far
|
http://arxiv.org/pdf/2202.01711v4
|
Data-driven algorithms are studied in diverse domains to support critical
decisions, directly impacting people's well-being. As a result, a growing
community of researchers has been investigating the equity of existing
algorithms and proposing novel ones, advancing the understanding of risks and
opportunities of automated decision-making for historically disadvantaged
populations. Progress in fair Machine Learning hinges on data, which can be
appropriately used only if adequately documented. Unfortunately, the
algorithmic fairness community suffers from a collective data documentation
debt caused by a lack of information on specific resources (opacity) and
scatteredness of available information (sparsity). In this work, we target data
documentation debt by surveying over two hundred datasets employed in
algorithmic fairness research, and producing standardized and searchable
documentation for each of them. Moreover we rigorously identify the three most
popular fairness datasets, namely Adult, COMPAS and German Credit, for which we
compile in-depth documentation.
This unifying documentation effort supports multiple contributions. Firstly,
we summarize the merits and limitations of Adult, COMPAS and German Credit,
adding to and unifying recent scholarship, calling into question their
suitability as general-purpose fairness benchmarks. Secondly, we document and
summarize hundreds of available alternatives, annotating their domain and
supported fairness tasks, along with additional properties of interest for
fairness researchers. Finally, we analyze these datasets from the perspective
of five important data curation topics: anonymization, consent, inclusivity,
sensitive attributes, and transparency. We discuss different approaches and
levels of attention to these topics, making them tangible, and distill them
into a set of best practices for the curation of novel resources.
|