Linking In-context Learning in Transformers to Human Episodic Memory
Abstract
Understanding the connections between artificial and biological intelligent systems can reveal fundamental principles underlying general intelligence. While many artificial intelligence (AI) models have a neuroscience counterpart, such connections are largely missing in Transformer models and the self-attention mechanism. Here, we examine the relationship between attention heads and human episodic memory. We focus on induction heads, which contribute to the in-context learning capabilities of Transformer-based large language models (LLMs). We demonstrate that induction heads are behaviorally, functionally, and mechanistically similar to the contextual maintenance and retrieval (CMR) model of human episodic memory. Our analyses of LLMs pre-trained on extensive text data show that CMR-like heads often emerge in the intermediate model layers and that their behavior qualitatively mirrors the memory biases seen in humans. Our findings uncover a parallel between the computational mechanisms of LLMs and human memory, offering valuable insights into both research fields.
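As a rough illustration of the induction-head behavior discussed in the abstract, the sketch below implements the [A][B] … [A] → [B] prefix-matching-and-copying rule on a toy token list. This is a hypothetical, behavioral caricature written for this page (the function name and example tokens are illustrative), not code from the paper, which analyzes attention heads inside pre-trained LLMs.

```python
def induction_head_prediction(tokens):
    """Toy illustration of the induction-head pattern [A][B] ... [A] -> [B].

    For the current (last) token, look at positions that immediately follow
    earlier occurrences of the same token, and "copy" the token found there
    as the prediction. This is a behavioral caricature of prefix matching
    plus copying, not the paper's analysis pipeline.
    """
    current = tokens[-1]
    # Tokens that previously followed an occurrence of the current token.
    candidates = [tokens[j] for j in range(1, len(tokens) - 1)
                  if tokens[j - 1] == current]
    if not candidates:
        return None  # no earlier occurrence of the current token
    # A real head mixes candidates softly via attention; here we simply
    # take the most recent match.
    return candidates[-1]


# Example: "the cat sat on the" -> predicts "cat"
print(induction_head_prediction(["the", "cat", "sat", "on", "the"]))
```

A real induction head computes this retrieval softly through attention weights over all earlier positions; the paper's claim is that these retrieval patterns qualitatively resemble the biases of human episodic memory as captured by CMR.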
Community
any code? :)
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Emergence of Episodic Memory in Transformers: Characterizing Changes in Temporal Structure of Attention Scores During Training (2025)
- Learning Task Representations from In-Context Learning (2025)
- Episodic Memories Generation and Evaluation Benchmark for Large Language Models (2025)
- Human-like conceptual representations emerge from language prediction (2025)
- LM2: Large Memory Models (2025)
- In-context denoising with one-layer transformers: connections between attention and associative memory retrieval (2025)
- What Matters for In-Context Learning: A Balancing Act of Look-up and In-Weight Learning (2025)