LMK > CLS: Landmark Pooling for Dense Embeddings
Abstract
Landmark pooling improves long-context representation learning by partitioning sequences into chunks and using landmark tokens to preserve both global and local information more effectively than traditional pooling methods.
Representation learning is central to many downstream tasks such as search, clustering, classification, and reranking. State-of-the-art sequence encoders typically collapse a variable-length token sequence to a single vector using a pooling operator, most commonly a special [CLS] token or mean pooling over token embeddings. In this paper, we identify systematic weaknesses of these pooling strategies: [CLS] tends to concentrate information toward the initial positions of the sequence and can under-represent distributed evidence, while mean pooling can dilute salient local signals, sometimes leading to worse short-context performance. To address these issues, we introduce Landmark (LMK) pooling, which partitions a sequence into chunks, inserts landmark tokens between chunks, and forms the final representation by mean-pooling the landmark token embeddings. This simple mechanism improves long-context extrapolation without sacrificing local salient features, at the cost of introducing a small number of special tokens. We empirically demonstrate that LMK pooling matches existing methods on short-context retrieval tasks and yields substantial improvements on long-context tasks, making it a practical and scalable alternative to existing pooling methods.
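The abstract describes the mechanism at a high level; below is a minimal sketch of how landmark pooling could be implemented, assuming a fixed chunk size and a dedicated landmark token id. Names such as `insert_landmarks`, `landmark_pool`, `chunk_size`, and `lmk_token_id` are illustrative and not taken from the paper.

```python
# Minimal sketch of landmark (LMK) pooling. All names and hyperparameters
# here (chunk_size, lmk_token_id, etc.) are assumptions for illustration.
import torch


def insert_landmarks(input_ids: torch.Tensor, lmk_token_id: int, chunk_size: int):
    """Partition each sequence into fixed-size chunks and append a landmark
    token after every chunk. Returns the new ids and a boolean mask that is
    True at landmark positions."""
    batch, _ = input_ids.shape
    pieces, mask_pieces = [], []
    for chunk in input_ids.split(chunk_size, dim=1):
        lmk = torch.full((batch, 1), lmk_token_id, dtype=input_ids.dtype)
        pieces += [chunk, lmk]
        mask_pieces += [torch.zeros(batch, chunk.size(1), dtype=torch.bool),
                        torch.ones(batch, 1, dtype=torch.bool)]
    return torch.cat(pieces, dim=1), torch.cat(mask_pieces, dim=1)


def landmark_pool(hidden_states: torch.Tensor, landmark_mask: torch.Tensor) -> torch.Tensor:
    """Mean-pool the encoder hidden states at landmark positions into a
    single (batch, hidden) embedding per sequence."""
    mask = landmark_mask.unsqueeze(-1).to(hidden_states.dtype)  # (batch, seq, 1)
    summed = (hidden_states * mask).sum(dim=1)                  # (batch, hidden)
    counts = mask.sum(dim=1).clamp(min=1.0)                     # (batch, 1)
    return summed / counts


if __name__ == "__main__":
    ids = torch.randint(5, 1000, (2, 12))           # toy token ids
    new_ids, lmk_mask = insert_landmarks(ids, lmk_token_id=4, chunk_size=4)
    hidden = torch.randn(2, new_ids.size(1), 16)    # stand-in for encoder outputs
    embedding = landmark_pool(hidden, lmk_mask)
    print(embedding.shape)                          # torch.Size([2, 16])
```

In this sketch each landmark summarizes its local chunk during encoding, and averaging only the landmark positions keeps the final vector's size independent of sequence length while still aggregating chunk-level evidence.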
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper; they were recommended by the Semantic Scholar API:
- KV-Embedding: Training-free Text Embedding via Internal KV Re-routing in Decoder-only LLMs (2026)
- Sequence Repetition Enhances Token Embeddings and Improves Sequence Labeling with Decoder-only Language Models (2026)
- CausalEmbed: Auto-Regressive Multi-Vector Generation in Latent Space for Visual Document Embedding (2026)
- ReinPool: Reinforcement Learning Pooling Multi-Vector Embeddings for Retrieval System (2026)
- Extending the Context of Pretrained LLMs by Dropping Their Positional Embeddings (2025)
- BERT-JEPA: Reorganizing CLS Embeddings for Language-Invariant Semantics (2026)
- Next-Embedding Prediction Makes Strong Vision Learners (2025)