Fragile Knowledge, Robust Instruction-Following: The Width Pruning Dichotomy in Llama-3.2
Abstract
Structured width pruning of GLU-MLP layers, guided by the Maximum Absolute Weight (MAW) criterion, reveals a systematic dichotomy in how reducing the expansion ratio affects different model capabilities. While performance on tasks relying on parametric knowledge (e.g., MMLU, GSM8K) and perplexity metrics degrades predictably, instruction-following capabilities improve substantially (+46% to +75% on IFEval for Llama-3.2-1B and 3B models), and multi-step reasoning remains robust (MUSR). This pattern challenges the prevailing assumption that pruning induces uniform degradation. We evaluated seven expansion ratio configurations using comprehensive benchmarks assessing factual knowledge, mathematical reasoning, language comprehension, instruction-following, and truthfulness. Our analysis identifies the expansion ratio as a critical architectural parameter that selectively modulates cognitive capabilities, rather than merely serving as a compression metric. We provide the first systematic characterization of this selective preservation phenomenon. Notably, we document a robust inverse correlation (r = -0.864, p = 0.012 in Llama-3.2-3B) between factual knowledge capacity (MMLU) and truthfulness metrics (TruthfulQA-MC2): as knowledge degrades, the model's ability to reject common misconceptions improves consistently. This connects two previously distinct research areas, demonstrating that MAW-guided width pruning acts as a selective filter, reducing parametric knowledge while preserving or enhancing behavioral alignment. Additionally, we quantify context-dependent efficiency trade-offs: pruned configurations achieve up to 23% reduction in energy consumption (J/token) but incur penalties in single-request latency, whereas batch processing workloads benefit uniformly.
TL;DR: We present a systematic characterization of width pruning in Llama-3.2 models (1B and 3B), revealing that reducing the expansion ratio doesn't degrade the model uniformly.
The Method (MAW): We propose the Maximum Absolute Weight (MAW) criterion, a peak-to-peak metric that scores a neuron's importance as the total range of its weights: max(W) + |min(W)|, i.e., max(W) − min(W) when the minimum weight is negative.
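The peak-to-peak score above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the weight-matrix orientation (one row per intermediate neuron) and the choice to score the gate projection are assumptions.

```python
import numpy as np

def maw_scores(w_gate: np.ndarray) -> np.ndarray:
    """MAW importance per intermediate neuron.

    w_gate: (intermediate_size, hidden_size) weight matrix of a GLU
    gate projection; each row holds one neuron's incoming weights.
    Returns max(W) + |min(W)| per row, the weight range when the
    row minimum is negative (the usual case for trained weights).
    """
    return w_gate.max(axis=1) + np.abs(w_gate.min(axis=1))

# Toy example: neuron 0 spans [-2, 1] -> score 3.0;
# neuron 1 spans [0.25, 0.5] -> score 0.75.
w = np.array([[1.0, -2.0], [0.5, 0.25]])
print(maw_scores(w))
```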
The Dichotomy: By preserving neurons with the highest peak-to-peak magnitude, the model maintains or drastically improves its instruction-following (+46% to +75% on IFEval) while sacrificing low-resilience factual knowledge (MMLU).
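Width pruning then amounts to keeping the top-scoring intermediate neurons and slicing the same channels out of all three GLU projections. A sketch under assumed shapes (gate/up: rows are neurons; down: columns are neurons); the function name and `keep_ratio` parameter are hypothetical, not from the paper.

```python
import numpy as np

def prune_glu_mlp(w_gate, w_up, w_down, keep_ratio=0.6):
    """Width-prune a GLU MLP, keeping neurons with the highest
    MAW (peak-to-peak) scores on the gate projection.

    Assumed shapes: w_gate, w_up: (inter, hidden); w_down: (hidden, inter).
    """
    scores = w_gate.max(axis=1) + np.abs(w_gate.min(axis=1))
    k = int(scores.size * keep_ratio)
    keep = np.sort(np.argsort(scores)[::-1][:k])  # top-k neuron indices, in order
    # Remove the same intermediate channels from all three projections.
    return w_gate[keep], w_up[keep], w_down[:, keep]
```

Keeping `keep` sorted preserves the original neuron ordering, so the pruned matrices stay drop-in compatible with the unpruned forward pass at the smaller intermediate size.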
Truthfulness Paradox: We document an inverse correlation (r = -0.864 in the 3B model) between knowledge and honesty: as the model "forgets" facts, it gets better at rejecting common misconceptions (TruthfulQA).