Improve model card: Add abstract, project page, correct license, and add metrics tag

#1 opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +17 -10
README.md CHANGED
@@ -1,5 +1,13 @@
  ---
- license: apache-2.0
+ base_model:
+ - allenai/OLMo-7B
+ datasets:
+ - ai2-adapt-dev/flan_v2_converted
+ language:
+ - en
+ library_name: transformers
+ license: mit
+ pipeline_tag: text-generation
  tags:
  - language-modeling
  - causal-lm
@@ -8,14 +16,8 @@ tags:
  - seed
  - bias
  - randomness
- datasets:
- - ai2-adapt-dev/flan_v2_converted
- language:
- - en
- base_model:
- - allenai/OLMo-7B
- pipeline_tag: text-generation
- library_name: transformers
+ metrics:
+ - accuracy
  ---

  # Model Card for OLMo-Flan
@@ -25,14 +27,19 @@ library_name: transformers
  **Model Description**
  This 🤗 Transformers model was finetuned using LoRA adapters for the arXiv paper:
  **"Planted in Pretraining, Swayed by Finetuning: A Case Study on the Origins of Cognitive Biases in LLMs"**
+
+ **Abstract**
+ Large language models (LLMs) exhibit cognitive biases -- systematic tendencies of irrational decision-making, similar to those seen in humans. Prior work has found that these biases vary across models and can be amplified by instruction tuning. However, it remains unclear if these differences in biases stem from pretraining, finetuning, or even random noise due to training stochasticity. We propose a two-step causal experimental approach to disentangle these factors. First, we finetune models multiple times using different random seeds to study how training randomness affects over 30 cognitive biases. Second, we introduce *cross-tuning* -- swapping instruction datasets between models to isolate bias sources. This swap uses datasets that led to different bias patterns, directly testing whether biases are dataset-dependent. Our findings reveal that while training randomness introduces some variability, biases are mainly shaped by pretraining: models with the same pretrained backbone exhibit more similar bias patterns than those sharing only finetuning data. These insights suggest that understanding biases in finetuned models requires considering their pretraining origins beyond finetuning effects. This perspective can guide future efforts to develop principled strategies for evaluating and mitigating bias in LLMs.
+
  We study whether cognitive biases in LLMs emerge from pretraining, instruction tuning, or training randomness.
  This is one of 3 identical versions trained with different random seeds.

  - **Model type**: Causal decoder-based transformer
  - **Language(s)**: English
- - **License**: Apache 2.0
+ - **License**: MIT
  - **Finetuned from**: `allenai/OLMo-7B`
  - **Paper**: https://arxiv.org/abs/2507.07186
+ - **Project Page**: https://itay1itzhak.github.io/planted-in-pretraining
  - **Repository**: https://github.com/itay1itzhak/planted-in-pretraining

  ## Uses
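
For readers of this card, a minimal usage sketch consistent with the `library_name: transformers` and `pipeline_tag: text-generation` metadata above. The repo id below is a placeholder (it is not stated in this PR), and OLMo checkpoints may require a recent `transformers` release, or `trust_remote_code=True` on older ones.

```python
# Minimal sketch, not part of the PR: loads the finetuned checkpoint with
# 🤗 Transformers as implied by the card metadata.
# NOTE: "itay1itzhak/OLMo-Flan" is a placeholder repo id -- substitute the
# actual model id shown on this model page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "itay1itzhak/OLMo-Flan"  # placeholder, replace with the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Briefly explain what a cognitive bias is."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```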