Modalities: Tabular, Text
Formats: csv
Size: < 1K
Libraries: Datasets, pandas
License:
karty1 committed
Commit ac8eeae · 1 Parent(s): f68600c

feat: add license notes, format README

README.md CHANGED
@@ -8,7 +8,7 @@ AA-LCR includes 100 hard text-based questions that require reasoning across mult
 
 ## Dataset Development
 
- AA-LCR was created through a rigorous multi-phase process involving several members of the Artificial Analysis research team and more than a dozen undergraduate students who were engaged on a short-term contract basis to write and/or validate questions.
+ AA-LCR was created through a rigorous multi-phase process involving several members of the Artificial Analysis research team and more than a dozen undergraduate students who were engaged on a short-term contract basis to write and/or validate questions.
 
 **Document Curation**: We selected diverse document sets (company reports, government consultations, legal documents, academic papers) averaging ~100,000 tokens each, representing real materials knowledge workers analyze.
 
@@ -26,9 +26,9 @@ This approach validates that AA-LCR tests genuine reasoning capabilities rather
 
 ## Technical Details
 
- AA-LCR comprises 100 questions across 7 types of text-only documents (i.e. Company Reports, Industry Reports, Government Consultations, Academia, Legal, Marketing Materials and Survey Reports). Multiple independent documents, forming a Document Set with a total length of ~100k tokens are passed as context for each question. For instance, the Company Documents topic includes separate document sets containing 2023 and 2024 company reports, respectively.
+ AA-LCR comprises 100 questions across 7 types of text-only documents (i.e. Company Reports, Industry Reports, Government Consultations, Academia, Legal, Marketing Materials and Survey Reports). Multiple independent documents, forming a Document Set with a total length of ~100k tokens are passed as context for each question. For instance, the Company Documents topic includes separate document sets containing 2023 and 2024 company reports, respectively.
 
- Each question requires using the Document Set and applying general and mathematical reasoning.
+ Each question requires using the Document Set and applying general and mathematical reasoning.
 
 <div class="overflow-x-auto my-6">
 <table class="min-w-full border border-gray-300 bg-white">
@@ -113,11 +113,11 @@ Each question requires using the Document Set and applying general and mathemati
 
 **Sample Question:**
 
- \`\`\`json
+ ```json
 For the company and quarter where the company reported a 13.5% decline on the prior quarters operating income. What was their adjusted EBITDA? List the company name and adjusted EBITDA
 
 Answer: Equinix, $901 million
- \`\`\`
+ ```
 
 Examples of other types of questions include:
 
@@ -155,7 +155,7 @@ Reported token counts per question are based on the completed prompt, using the
 
 We use an LLM-based equality checker to evaluate responses:
 
- \`\`\`
+ ```
 Assess whether the following CANDIDATE ANSWER is CORRECT or INCORRECT.
 For the CANDIDATE ANSWER to be correct, it must be consistent with the OFFICIAL ANSWER.
 
@@ -164,8 +164,7 @@ The OFFICIAL ANSWER: {official_answer}
 CANDIDATE ANSWER TO ASSESS: {candidate_answer}
 
 Reply only with CORRECT or INCORRECT.
-
- \`\`\`
+ ```
 
 Qwen3 235B A22B 2507 Non-reasoning is used as the equality checker model.
 
@@ -175,11 +174,17 @@ The AA-LCR dataset is available at [https://huggingface.co/datasets/ArtificialAn
 
 If you use AA-LCR in your research, please cite:
 
- \`\`\`json
+ ```json
 @dataset{artificialanalysis2025lcr,
 title={Artificial Analysis Long Context Reasoning Benchmark(LCR)},
 author={Artificial Analysis Team},
 year={2025},
 publisher={Artificial Analysis, Inc.}
 }
- \`\`\`
+ ```
+
+ ## License
+
+ **Question set**: Licensed under the Apache License 2.0
+
+ **Document set**: Provided as a text representation of documents publicly available at time of dataset creation. We do not claim copyright or place any license over this data.
AA-LCR_extracted-text.zip → extracted_text/AA-LCR_extracted-text.zip RENAMED
File without changes
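
For readers reproducing the evaluation, the equality check described in the README above can be driven by any OpenAI-compatible chat endpoint that serves the checker model. The sketch below is illustrative only: the base URL, environment variable names, exact model identifier, and the `is_correct` helper are assumptions, not part of the dataset; only the prompt text is taken from the README.

```python
# Illustrative sketch of the LLM-based equality check described in the README.
# Assumptions: an OpenAI-compatible endpoint serving the checker model, plus the
# environment variables and model id used below (none are specified by the dataset).
import os

from openai import OpenAI

# Prompt template reproduced from the README's equality-checker section.
CHECKER_PROMPT = """Assess whether the following CANDIDATE ANSWER is CORRECT or INCORRECT.
For the CANDIDATE ANSWER to be correct, it must be consistent with the OFFICIAL ANSWER.

The OFFICIAL ANSWER: {official_answer}

CANDIDATE ANSWER TO ASSESS: {candidate_answer}

Reply only with CORRECT or INCORRECT."""

client = OpenAI(
    base_url=os.environ.get("CHECKER_BASE_URL"),      # assumed endpoint hosting the checker model
    api_key=os.environ.get("CHECKER_API_KEY", "EMPTY"),
)


def is_correct(official_answer: str, candidate_answer: str) -> bool:
    """Ask the checker model whether a candidate answer matches the official answer."""
    response = client.chat.completions.create(
        model="Qwen3-235B-A22B-Instruct-2507",  # assumed id for "Qwen3 235B A22B 2507 Non-reasoning"
        messages=[{
            "role": "user",
            "content": CHECKER_PROMPT.format(
                official_answer=official_answer,
                candidate_answer=candidate_answer,
            ),
        }],
        temperature=0.0,  # assumption: deterministic judging
    )
    verdict = response.choices[0].message.content.strip().upper()
    # "INCORRECT" does not start with "CORRECT", so a prefix check is safe here.
    return verdict.startswith("CORRECT")


# Example using the sample question's official answer from the README:
# is_correct("Equinix, $901 million", "Equinix, with adjusted EBITDA of $901 million")
```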