---
dataset_info:
  features:
    - name: question
      dtype: string
      description: The question text
    - name: answer
      dtype: string
      description: The correct answer to the question
    - name: task
      dtype: string
      description: The task type (Math-9, Math-11, Symbolic-Equal, Symbolic-Longer, Commonsense)
    - name: noise_type
      dtype: string
      description: The noise type (Clean, Irrelevant, Inaccurate)
    - name: difficulty
      dtype: string
      description: The difficulty level of noise (None, Easy, Medium, Hard)
    - name: demos
      dtype: string
      description: Chain-of-thought demonstrations containing example questions and answers
    - name: num_demo_thoughts
      dtype: float
      description: Average number of thinking steps in each demonstration
    - name: num_demo_noisy_thoughts
      dtype: float
      description: Average number of noisy thinking steps in each demonstration
    - name: noise_distribution
      dtype: string
      description: Type of noise distribution (fixed or random)
  splits:
    - name: test
      num_bytes: 2001494847
      num_examples: 184737
configs:
  - config_name: default
    data_files:
      - split: test
        path: test-*
---

# NoRa: Noisy Rationales Dataset

NoRa (Noisy Rationales) is a dataset designed to evaluate how robustly Large Language Models (LLMs) reason when the chain-of-thought demonstrations they are prompted with are noisy. It pairs clean rationales with variants containing different types and difficulty levels of noise.

## Dataset Structure

The dataset is organized along three main attributes:

### 1. task (Task Types)

- **Math-9**: Mathematical operations in base-9
- **Math-11**: Mathematical operations in base-11
- **Symbolic-Equal**: Symbol manipulation with equal sequence lengths
- **Symbolic-Longer**: Symbol manipulation with longer sequence lengths
- **Commonsense**: Relational reasoning tasks

### 2. noise_type (Noise Types)

- **Clean**: Chain-of-thought demonstrations without noise
- **Irrelevant**: Noise that is irrelevant to the reasoning
- **Inaccurate**: Noise containing inaccurate information

### 3. difficulty (for noisy samples)

- **Easy**: Low-difficulty noise
- **Medium**: Medium-difficulty noise
- **Hard**: High-difficulty noise
- **None**: For Clean samples (no noise)
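
The three attributes above combine into a grid of configurations: Clean demonstrations always carry difficulty None, while Irrelevant and Inaccurate demonstrations each come in three difficulty levels. A minimal sketch of the valid combinations (illustrative only; per-combination example counts come from the dataset itself):

```python
from itertools import product

tasks = ["Math-9", "Math-11", "Symbolic-Equal", "Symbolic-Longer", "Commonsense"]

# Clean demos carry no noise, so their difficulty is always "None";
# Irrelevant/Inaccurate demos come in Easy/Medium/Hard variants.
noise_settings = [("Clean", "None")] + list(
    product(["Irrelevant", "Inaccurate"], ["Easy", "Medium", "Hard"])
)

configs = [
    {"task": t, "noise_type": nt, "difficulty": d}
    for t in tasks
    for nt, d in noise_settings
]
print(len(configs))  # 5 tasks x (1 clean + 2 x 3 noisy) = 35 configurations
```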

## Data Format

Each sample contains the following fields:

- `question`: The question text
- `answer`: The correct answer
- `task`: The task type (Math-9, Math-11, Symbolic-Equal, Symbolic-Longer, Commonsense)
- `noise_type`: The noise type (Clean, Irrelevant, Inaccurate)
- `difficulty`: The difficulty level (Easy, Medium, Hard, or None for Clean samples)
- `demos`: Chain-of-thought demonstrations containing example questions and answers
- `num_demo_thoughts`: Average number of thinking steps in each demonstration
- `num_demo_noisy_thoughts`: Average number of noisy thinking steps in each demonstration
- `noise_distribution`: Type of noise distribution, e.g., 'fixed' or 'random'
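
Concretely, a single record has the shape sketched below. This is a hand-written illustration of the schema only; the values are invented, not drawn from the dataset:

```python
# Illustrative record matching the field list above; all values are made up.
sample = {
    "question": "In base-9, what is 24 + 15?",
    "answer": "40",
    "task": "Math-9",
    "noise_type": "Irrelevant",
    "difficulty": "Easy",
    "demos": "Question: ...\nAnswer: ...\n\nQuestion: ...\nAnswer: ...",
    "num_demo_thoughts": 4.0,
    "num_demo_noisy_thoughts": 1.0,
    "noise_distribution": "fixed",
}
```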

## Usage Examples

```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("TMLR-Group-HF/NoRa", cache_dir="./dataset_cache")

# Filter by task type
math9_examples = dataset["test"].filter(lambda example: example["task"] == "Math-9")
math11_examples = dataset["test"].filter(lambda example: example["task"] == "Math-11")
symbolic_equal_examples = dataset["test"].filter(lambda example: example["task"] == "Symbolic-Equal")
symbolic_longer_examples = dataset["test"].filter(lambda example: example["task"] == "Symbolic-Longer")
commonsense_examples = dataset["test"].filter(lambda example: example["task"] == "Commonsense")

# Filter by noise type
clean_examples = dataset["test"].filter(lambda example: example['noise_type'] == "Clean")
irrelevant_examples = dataset["test"].filter(lambda example: example['noise_type'] == "Irrelevant")
inaccurate_examples = dataset["test"].filter(lambda example: example['noise_type'] == "Inaccurate")

# Filter by Difficulty level
easy_examples = dataset["test"].filter(lambda example: example['difficulty'] == "Easy")
medium_examples = dataset["test"].filter(lambda example: example['difficulty'] == "Medium")
hard_examples = dataset["test"].filter(lambda example: example['difficulty'] == "Hard")

# Combined filtering
math9_irrelevant_hard = dataset["test"].filter(
    lambda example: example["task"] == "Math-9"
    and example['noise_type'] == "Irrelevant"
    and example['difficulty'] == "Hard"
)

# Get a single sample
sample = dataset["test"][0]
print(f"Question: {sample['question']}")
print(f"Answer: {sample['answer']}")
print(f"Task: {sample['task']}")
print("Demos: ------------------------------")

# Build an n-shot prompt prefix from the first n demonstrations
def get_n_shot_demos(example, n=3):
    demos = example['demos'].split('\n\n')[:n]
    return '\n\n'.join(demos)

# Example of a 3-shot prompt
demos = get_n_shot_demos(sample, n=3)
print(demos)
print("--------------------------------")
print(f"Noise Type: {sample['noise_type']}")
print(f"Difficulty: {sample['difficulty']}")
```
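
The `demos` field stores all demonstrations as one string separated by blank lines, so prompt assembly can be sketched without downloading the dataset. The demo text below is synthetic and the prompt template is only one reasonable choice, not the paper's exact format:

```python
def get_n_shot_demos(demos: str, n: int = 3) -> str:
    """Keep the first n blank-line-separated demonstrations."""
    return "\n\n".join(demos.split("\n\n")[:n])

def build_prompt(demos: str, question: str, n: int = 3) -> str:
    # Demonstrations first, then the query question with an open answer slot.
    return f"{get_n_shot_demos(demos, n)}\n\nQuestion: {question}\nAnswer:"

# Synthetic demos string with four demonstrations.
demos = "\n\n".join(f"Question: q{i}\nAnswer: a{i}" for i in range(4))
prompt = build_prompt(demos, "In base-9, what is 24 + 15?", n=2)
print(prompt.count("Question:"))  # 2 kept demos + 1 query question = 3
```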

## Citation

If you use the NoRa dataset, please cite the original paper:

```
@inproceedings{zhou2024can,
  title={Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?},
  author={Zhou, Zhanke and Tao, Rong and Zhu, Jianing and Luo, Yiwen and Wang, Zengmao and Han, Bo},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024)},
  year={2024},
  url={https://openreview.net/pdf?id=FbuODM02ra}
}
```

## Contact

For questions or feedback regarding the dataset, please visit the [GitHub repository](https://github.com/tmlr-group/NoisyRationales).