---
pretty_name: ELI-Why Manually Web-Retrieved Explanations
language:
  - en
license: mit
tags:
  - education
  - explainability
  - question-answering
  - retrieval
  - pedagogy
dataset_info:
  features:
    - name: Question
      dtype: string
    - name: Domain
      dtype: string
    - name: Topic
      dtype: string
    - name: Explanation
      dtype: string
    - name: Intended Educational Background
      dtype: string
    - name: Source
      dtype: string
  splits:
    - name: train
      num_examples: 123
annotations_creators:
  - expert-verified
source_datasets:
  - original
multilinguality:
  - monolingual
size_categories:
  - n<1K
citation: |
  @inproceedings{joshi2025eliwhy,
    title={{ELI-Why}: Evaluating the Pedagogical Utility of Language Model Explanations},
    author={Joshi, Brihi and He, Keyu and Ramnath, Sahana and Sabouri, Sadra and Zhou, Kaitlyn and Chattopadhyay, Souti and Swayamdipta, Swabha and Ren, Xiang},
    year={2025}
  }
---

# 📚 ELI-Why Manually Web-Retrieved Explanations

## 🧠 Dataset Summary

This dataset contains **high-quality, manually curated explanations** for "Why" questions, **retrieved from the web** to serve as educationally appropriate references.  
Each explanation is annotated with:
- A corresponding question
- A fine-grained topic and domain label (e.g., STEM / Physics)
- The intended educational level (Elementary, High School, Graduate School)
- The original **source URL** from which the explanation was retrieved

The explanations were selected by human annotators for clarity, correctness, and pedagogical value for the target educational level.

## 📦 Dataset Structure

Each row contains:
- `Question`: The "Why" question
- `Domain`: "STEM" or "Non-STEM"
- `Topic`: More specific subject area (e.g., "Physics", "Sociology")
- `Explanation`: The full retrieved explanation (manually copied from the source page)
- `Intended Educational Background`: One of `["Elementary", "High School", "Graduate School"]`
- `Source`: URL of the original explanation

### 🔍 Example

```json
{
  "Question": "Why is the sky blue?",
  "Domain": "STEM",
  "Topic": "Physics",
  "Explanation": "If you stick your hand out of the car window when it's moving...",
  "Intended Educational Background": "Elementary",
  "Source": "https://example.com/sky-blue-explained"
}
```
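
### 💻 Loading with 🤗 `datasets`

The card does not state the Hub repo ID, so the snippet below is a minimal sketch of loading and filtering the data with the 🤗 `datasets` library; the repo ID `your-org/eli-why-web-retrieved` is a placeholder, and the column names follow the schema above.

```python
from collections import Counter

from datasets import load_dataset

# Placeholder repo ID -- substitute the actual Hub path of this dataset.
ds = load_dataset("your-org/eli-why-web-retrieved", split="train")

print(ds)        # column names and row count
print(ds[0])     # first Question / Explanation pair

# Keep only explanations written for an elementary-school audience.
elementary = ds.filter(
    lambda row: row["Intended Educational Background"] == "Elementary"
)
print(len(elementary), "elementary-level explanations")

# Quick overview of the Domain distribution (STEM vs. Non-STEM).
print(Counter(ds["Domain"]))
```

Filtering on `Intended Educational Background` in this way is handy when comparing explanations against a single target audience.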

---

## 📚 Citation

If you use this dataset, please cite:

```bibtex
@inproceedings{joshi2025eliwhy,
  title={{ELI-Why}: Evaluating the Pedagogical Utility of Language Model Explanations},
  author={Joshi, Brihi and He, Keyu and Ramnath, Sahana and Sabouri, Sadra and Zhou, Kaitlyn and Chattopadhyay, Souti and Swayamdipta, Swabha and Ren, Xiang},
  year={2025}
}
```