---
base_model:
- google/t5-v1_1-xxl
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text2text-generation
tags:
- language-modeling
- causal-lm
- bias-analysis
- cognitive-bias
---

# Model Card for T5-Flan

## Model Details

**Model Description**

This 🤗 Transformers model was finetuned using LoRA adapters for the arXiv paper
**"Planted in Pretraining, Swayed by Finetuning: A Case Study on the Origins of Cognitive Biases in LLMs"**.

The paper studies whether cognitive biases in LLMs emerge from pretraining, instruction tuning, or training randomness. This checkpoint is one of three versions trained identically except for the random seed.

- **Model type**: Encoder-decoder (seq2seq) transformer
- **Language(s)**: English
- **License**: Apache 2.0
- **Finetuned from**: `google/t5-v1_1-xxl`
- **Paper**: https://arxiv.org/abs/2507.07186
- **Project Page**: https://itay1itzhak.github.io/planted-in-pretraining
- **Repository**: https://github.com/itay1itzhak/planted-in-pretraining

## Uses

### Direct Use

Research on cognitive biases in LLMs, in particular testing the causal impact of pretraining versus instruction tuning on bias patterns.

### Out-of-Scope Use

Do not use in production, in sensitive domains, or for decision-critical applications.

## How to Get Started with the Model

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# T5 is an encoder-decoder model, so load it as a seq2seq LM rather than a causal LM.
model = AutoModelForSeq2SeqLM.from_pretrained("itay1itzhak/T5-Flan-Seed-0")
tokenizer = AutoTokenizer.from_pretrained("itay1itzhak/T5-Flan-Seed-0")

inputs = tokenizer("Example input?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Details

- Finetuning method: LoRA (high-rank, rank ∈ [64, 512])
- Instruction data: Flan (350K examples)
- Seeds: 3 per setting, to evaluate the effect of training randomness
- Batch size: 128 (OLMo) / 64 (T5)
- Learning rate: 1e-6 to 1e-3
- Steps: ~5.5k (OLMo) / ~16k (T5)
- Mixed precision: fp16 (OLMo) / bf16 (T5)

## Evaluation

- Evaluated on 32 cognitive biases from Itzhak et al. (2024) and Malberg et al. (2024)
- Metrics: mean bias score, PCA clustering, MMLU accuracy
- Findings: biases primarily originate in pretraining; training randomness introduces moderate variation

## Environmental Impact

- Hardware: 4× NVIDIA A40
- Estimated time: ~120 GPU hours per model

## Technical Specifications

- Architecture: T5 v1.1 XXL (11B parameters)
- Instruction dataset: Flan (350K examples)
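
To make the finetuning setup in Training Details concrete, below is a minimal sketch of a high-rank LoRA configuration for `google/t5-v1_1-xxl` using the PEFT library. The specific rank, alpha, dropout, and target modules are illustrative assumptions within the ranges stated above, not the exact hyperparameters used for this model.

```python
# A minimal LoRA sketch with assumed hyperparameters; not the exact training config.
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForSeq2SeqLM.from_pretrained("google/t5-v1_1-xxl")

lora_cfg = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=256,                      # illustrative rank within the paper's [64, 512] range
    lora_alpha=512,             # illustrative scaling factor (not specified in this card)
    lora_dropout=0.05,          # illustrative
    target_modules=["q", "v"],  # T5 attention projections; other modules may also be targeted
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights are trainable
```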
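
To illustrate the kind of bias probing described in the Evaluation section, here is a minimal sketch that scores a framing-effect item by comparing the model's log-likelihood for each answer option under a gain frame and a loss frame. The prompts, the `option_logprob` helper, and the scoring scheme are hypothetical illustrations, not the paper's evaluation code; see the project repository for the actual benchmarks.

```python
# Illustrative bias probe: compare option log-likelihoods across two framings.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "itay1itzhak/T5-Flan-Seed-0"  # seeds 1 and 2 follow the same pattern
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
model.eval()

def option_logprob(prompt: str, option: str) -> float:
    """Sum of log-probabilities the model assigns to `option` as the answer to `prompt`."""
    enc = tokenizer(prompt, return_tensors="pt").to(model.device)
    labels = tokenizer(option, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        loss = model(**enc, labels=labels).loss  # mean token-level cross-entropy
    return -loss.item() * labels.shape[1]

# Hypothetical framing-effect item (illustrative wording, not from the benchmarks above).
gain_frame = ("600 people are at risk. Program A saves 200 people. Program B saves all 600 "
              "with 1/3 probability and no one with 2/3 probability. Answer 'Program A' or 'Program B'.")
loss_frame = ("600 people are at risk. Program A lets 400 people die. Program B lets no one die "
              "with 1/3 probability and everyone die with 2/3 probability. Answer 'Program A' or 'Program B'.")

pref_gain = option_logprob(gain_frame, "Program A") - option_logprob(gain_frame, "Program B")
pref_loss = option_logprob(loss_frame, "Program A") - option_logprob(loss_frame, "Program B")
print(f"Preference for the safe option (gain frame): {pref_gain:.2f}")
print(f"Preference for the safe option (loss frame): {pref_loss:.2f}")
print(f"Framing-effect signal (gain minus loss): {pref_gain - pref_loss:.2f}")
```

A consistently higher preference for the risk-free option under the gain frame than under the loss frame would indicate a framing effect.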