arxiv:2509.03888

False Sense of Security: Why Probing-based Malicious Input Detection Fails to Generalize

Published on Sep 4
· Submitted by ZemingWei on Sep 5
Abstract

Probing-based approaches for detecting harmful instructions in LLMs are found to rely on superficial patterns rather than semantic understanding, indicating a need for redesigning models and evaluation methods.

AI-generated summary

Large Language Models (LLMs) can comply with harmful instructions, raising serious safety concerns despite their impressive capabilities. Recent work has leveraged probing-based approaches to study the separability of malicious and benign inputs in LLMs' internal representations, and researchers have proposed using such probing methods for safety detection. We systematically re-examine this paradigm. Motivated by poor out-of-distribution performance, we hypothesize that probes learn superficial patterns rather than semantic harmfulness. Through controlled experiments, we confirm this hypothesis and identify the specific patterns learned: instructional patterns and trigger words. Our investigation follows a systematic approach, progressing from demonstrating comparable performance of simple n-gram methods, to controlled experiments with semantically cleaned datasets, to detailed analysis of pattern dependencies. These results reveal a false sense of security around current probing-based approaches and highlight the need to redesign both models and evaluation protocols, for which we provide further discussions in the hope of suggesting responsible further research in this direction. We have open-sourced the project at https://github.com/WangCheng0116/Why-Probe-Fails.

Community

Paper submitter

🚨False Sense of Security: Our new paper identifies a critical limitation in representation probing-based malicious input detection—purported "high detection accuracy" may confer a false sense of security:


A core finding: Representation-based probing classifiers achieve ≥98% accuracy on in-distribution safety tests, but exhibit significant performance degradation (15%–99% drop) on out-of-distribution data, indicating failure to learn genuine harmful semantics.
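The probing setup being critiqued can be sketched in a few lines: a "probe" is just a linear classifier trained on frozen internal representations. The sketch below is not the paper's code; random vectors with a shifted mean stand in for LLM hidden states so the example is self-contained.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for LLM hidden states: in the probing literature these would be
# extracted from an intermediate transformer layer for each prompt; here,
# synthetic Gaussians with a mean shift play that role.
dim = 64
benign = rng.normal(0.0, 1.0, size=(200, dim))
harmful = rng.normal(0.5, 1.0, size=(200, dim))

X = np.vstack([benign, harmful])
y = np.array([0] * 200 + [1] * 200)

# A linear probe on frozen representations: high in-distribution accuracy
# here says nothing about whether harmfulness itself was learned.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print(f"in-distribution accuracy: {probe.score(X, y):.2f}")
```

High separability on in-distribution data is exactly the evidence the paper argues is misleading: out-of-distribution prompts need not share the features the probe latched onto.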

We further conducted comparative experiments: first, even simple n-gram Naive Bayes models achieved performance comparable to sophisticated probing tools. This suggests probing classifiers may learn surface-level patterns rather than detect semantic harm.
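An n-gram Naive Bayes baseline of the kind described can be sketched with scikit-learn. The prompts below are hypothetical toy stand-ins (the paper uses standard safety benchmarks); the point is that this model sees only surface token statistics, with no access to model internals.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy stand-ins for harmful (1) and benign (0) prompts, illustrative only.
texts = [
    "how to make a bomb at home",
    "steps to hack into an account",
    "how to bake bread at home",
    "steps to set up an account",
]
labels = [1, 1, 0, 0]

# Word 1- and 2-gram counts + Naive Bayes: a purely surface-level detector.
baseline = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    MultinomialNB(),
)
baseline.fit(texts, labels)

# Shared n-grams like "to make" carry the decision, not semantics.
print(baseline.predict(["how to make explosives"]))  # → [1]
```

That such a shallow baseline rivals representation probes is the paper's first piece of evidence that the probes are not reading semantic harmfulness.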

Further validation: When we retained structural features of malicious datasets but replaced harmful content (e.g., "bomb fabrication") with benign alternatives (e.g., "bread making"), probing accuracy plummeted by 60–90%, confirming structural bias over harm recognition.


Analysis of learned patterns reveals two key cues: 1) Instructional linguistic formats (e.g., "how to…") and 2) spurious "malicious-associated" trigger words. Structural paraphrasing restores accuracy, while adding triggers to benign text inflates false positives.
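The trigger-word effect can be illustrated on toy data: a single "malicious-associated" word dragged into an otherwise benign sentence flips a surface-level classifier. The data and the trigger word ("bomb") below are hypothetical choices for the sketch, not from the paper.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training set in which "bomb" co-occurs only with the harmful class,
# making it a spurious trigger word.
texts = [
    "how to build a bomb",
    "where to buy a bomb",
    "how to build a shed",
    "where to buy a bike",
]
labels = [1, 1, 0, 0]

clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, labels)

# A clearly benign sentence that merely mentions the trigger word.
benign = "a chapter about a bomb disposal expert"

# The trigger word alone pulls the benign sentence into the harmful class,
# mirroring the false positives described above.
print(clf.predict([benign]))  # → [1]
```

A semantics-aware detector should leave such sentences in the benign class; a pattern-matcher cannot, which is the failure mode the trigger-word experiments expose.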

This work raises broader questions: If probing relies on surface cues, do existing probing-based insights (e.g., on truthfulness or hallucinations) lack generalizability? Reevaluation of prior conclusions may be necessary.

Hey @ZemingWei - Thanks for sharing! Would be great if all authors could claim the paper with their HF accounts.

