nielsr (HF Staff) committed · verified
Commit b56f665 · 1 Parent(s): d3d806e

Update model card: Refine pipeline tag, license, and add project page


This PR enhances the model card for `itay1itzhak/T5-Tulu` by:

* **Refining the `pipeline_tag`**: Changed from `text2text-generation` to `text-generation` to better reflect the model's primary use as a causal language model for text generation, and the way its `T5ForConditionalGeneration` architecture is loaded via `AutoModelForCausalLM` (see the loading sketch after this description).
* **Correcting the `license`**: Updated from `apache-2.0` to `mit` to accurately reflect the license specified in the accompanying GitHub repository.
* **Adding the project page link**: Included a direct link to the official project page (`https://itay1itzhak.github.io/planted-in-pretraining`) under "Model Details" for easier access to additional resources and context.
* **Adding `metrics` metadata**: Added `accuracy` to the metadata to improve discoverability based on the model's evaluation metrics.

These changes improve the clarity, accuracy, and discoverability of the model on the Hugging Face Hub.
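
For context on the loading pattern the refined `pipeline_tag` is meant to reflect, here is a minimal usage sketch. It assumes the repository publishes a standard (merged) T5 checkpoint under `itay1itzhak/T5-Tulu`; if only the LoRA adapters are shipped, the base `google/t5-v1_1-xxl` model plus `peft` would be needed instead, and the prompt below is purely illustrative.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Minimal sketch: load the checkpoint with the architecture named on the card.
# Assumes merged weights at this repo id; with LoRA-only adapters, load the
# base model and attach the adapters via peft instead.
model_id = "itay1itzhak/T5-Tulu"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# T5 is an encoder-decoder, so the prompt goes through the encoder input.
inputs = tokenizer("Answer the question: What causes tides?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Whether the Hub widget behaves as `text-generation` or `text2text-generation` follows from this metadata together with the checkpoint's config.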

Files changed (1)
  1. README.md +14 -11
README.md CHANGED
@@ -1,18 +1,20 @@
 ---
-license: apache-2.0
-tags:
-- language-modeling
-- causal-lm
-- bias-analysis
-- cognitive-bias
+base_model:
+- google/t5-v1_1-xxl
 datasets:
 - allenai/tulu-v2-sft-mixture
 language:
 - en
-base_model:
-- google/t5-v1_1-xxl
-pipeline_tag: text2text-generation
 library_name: transformers
+license: mit
+pipeline_tag: text-generation
+tags:
+- language-modeling
+- causal-lm
+- bias-analysis
+- cognitive-bias
+metrics:
+- accuracy
 ---
 
 # Model Card for T5-Tulu
@@ -23,13 +25,14 @@ library_name: transformers
 This 🤗 Transformers model was finetuned using LoRA adapters for the arXiv paper:
 **"Planted in Pretraining, Swayed by Finetuning: A Case Study on the Origins of Cognitive Biases in LLMs"**
 We study whether cognitive biases in LLMs emerge from pretraining, instruction tuning, or training randomness.
-This is one of 3 idnetical versions trained with different random seeds.
+This is one of 3 identical versions trained with different random seeds.
 
 - **Model type**: encoder-decoder based transformer
 - **Language(s)**: English
-- **License**: Apache 2.0
+- **License**: MIT
 - **Finetuned from**: `google/t5-v1_1-xxl`
 - **Paper**: https://arxiv.org/abs/2507.07186
+- **Project Page**: https://itay1itzhak.github.io/planted-in-pretraining
 - **Repository**: https://github.com/itay1itzhak/planted-in-pretraining
 
 ## Uses
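
Once merged, the new metadata can be confirmed directly from the Hub with `huggingface_hub` (a sketch; the values in the comments are the expected post-merge values, not guaranteed output):

```python
from huggingface_hub import ModelCard

# Read the model card for this repo and inspect its YAML metadata block.
card = ModelCard.load("itay1itzhak/T5-Tulu")

print(card.data.license)       # expected: mit
print(card.data.pipeline_tag)  # expected: text-generation
print(card.data.metrics)       # expected: ['accuracy']
print(card.data.tags)          # language-modeling, causal-lm, bias-analysis, cognitive-bias
```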