Tasks: Question Answering
Modalities: Text
Formats: json
Sub-tasks: multiple-choice-qa
Languages: English
Size: < 1K
License:
---

# Dataset Card for LLM Blockchain Benchmark

## Table of Contents
- [Table of Contents](#table-of-contents)

[...]

A complete list of tasks: ['general-reasoning', 'code', 'math']

| Model | Authors | Humanities | Social Science | STEM | Other | Average |
|------------------------------------|----------|:-------:|:-------:|:-------:|:-------:|:-------:|

[add tested models here]

### Languages
English

## Dataset Structure

### Data Instances
An example from the code subtask looks as follows:

```
{
  "question": "The defining idea of Uniswap v3 token is",
  ...
  "answer": "A"
}
```

### Data Fields
- `question`: a string feature
- `choices`: a list of 4 string features
- `answer`: a ClassLabel feature
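
For readers who want to inspect the fields described above, the sketch below shows one way to load the benchmark with the Hugging Face `datasets` library and look at a single instance. It is illustrative only: the repository ID and config name are placeholders, and the assumption that the `answer` ClassLabel uses the names `A`–`D` in the same order as `choices` is inferred from the example instance, not stated by this card.

```python
from datasets import load_dataset

# Hypothetical repository ID and config name -- substitute the real ones for this dataset.
REPO_ID = "your-org/llm-blockchain-benchmark"
CONFIG = "code"  # assumed to correspond to one of the listed tasks

ds = load_dataset(REPO_ID, CONFIG, split="test")

example = ds[0]
print(example["question"])      # string feature
print(example["choices"])       # list of 4 string features
answer_idx = example["answer"]  # ClassLabel values are stored as integer indices

# Recover the label name (e.g. "A") and, assuming choices are listed in A-D order,
# the text of the correct choice.
print(ds.features["answer"].int2str(answer_idx), example["choices"][answer_idx])
```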

### Data Splits
- `test`: all data under test for benchmarking
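
Because `test` is the only split, benchmarking reduces to scoring predictions over that split. The card does not prescribe an evaluation protocol, so the following is just a minimal accuracy loop under the same placeholder repository ID as above; `predict_choice` stands in for whatever model call you actually use.

```python
from datasets import load_dataset

def predict_choice(question: str, choices: list[str]) -> int:
    """Stand-in for real model inference; returns the index of the predicted choice."""
    return 0  # replace with an actual LLM call

# Hypothetical repository ID -- substitute the real one for this dataset.
ds = load_dataset("your-org/llm-blockchain-benchmark", "code", split="test")

correct = 0
for ex in ds:
    pred = predict_choice(ex["question"], ex["choices"])
    correct += int(pred == ex["answer"])  # `answer` is the ClassLabel index of the correct choice

print(f"accuracy: {correct / len(ds):.3f}")
```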

### Curation Rationale

This dataset addresses the scarcity of benchmarks designed specifically for Language Models (LMs) in the realm of blockchain technology. With the intersection of blockchain and LM technologies gaining traction, a focused dataset becomes essential. This collection serves as a resource for advancing research and understanding within the blockchain landscape.

### Source Data

#### Initial Data Collection and Normalization