Datasets:
Error when loading the dataset
Just running load_dataset() raises the following error. How can I solve it?
```
Traceback (most recent call last):
File "/home/workspace/x/repobench/repo_bench.py", line 6, in <module>
dataset = load_dataset("tianyang/repobench_java_v1.1")
File "/root/miniconda3/envs/autocoder/lib/python3.10/site-packages/datasets/load.py", line 2582, in load_dataset
builder_instance.download_and_prepare(
File "/root/miniconda3/envs/autocoder/lib/python3.10/site-packages/datasets/builder.py", line 1005, in download_and_prepare
self._download_and_prepare(
File "/root/miniconda3/envs/autocoder/lib/python3.10/site-packages/datasets/builder.py", line 1118, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/root/miniconda3/envs/autocoder/lib/python3.10/site-packages/datasets/utils/info_utils.py", line 101, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='cross_file_first', num_bytes=504528431, num_examples=8033, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='cross_file_first', num_bytes=670976719, num_examples=8722, shard_lengths=[8361, 361], dataset_name='repobench_java_v1.1')}, {'expected': SplitInfo(name='cross_file_random', num_bytes=467242455, num_examples=7618, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='cross_file_random', num_bytes=675950097, num_examples=8705, shard_lengths=[8353, 352], dataset_name='repobench_java_v1.1')}, {'expected': SplitInfo(name='in_file', num_bytes=488999100, num_examples=7910, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='in_file', num_bytes=674755454, num_examples=8696, shard_lengths=[8348, 348], dataset_name='repobench_java_v1.1')}]
```
Hello there,
I think this is a datasets version error; adding ignore_verifications=True should solve this.
I'm encountering the same issue when trying to load the tianyang/repobench_java_v1.1 dataset.
I saw the suggestion from the owner to use ignore_verifications=True and tried implementing it like this:
```python
from datasets import load_dataset
dataset = load_dataset("tianyang/repobench_java_v1.1", ignore_verifications=True)
```
Unfortunately, this didn't resolve the problem for me, and I'm still getting an error. The traceback points to a ValueError indicating that the BuilderConfig doesn't recognize the ignore_verifications key.
```
Traceback (most recent call last):
File "/data/wxl/graphrag4se/GRACE/utils/download_hf_dataset.py", line 3, in <module>
dataset = load_dataset("tianyang/repobench_java_v1.1",ignore_verifications=True)
File "/data/wxl/miniforge3/envs/se/lib/python3.10/site-packages/datasets/load.py", line 2061, in load_dataset
builder_instance = load_dataset_builder(
File "/data/wxl/miniforge3/envs/se/lib/python3.10/site-packages/datasets/load.py", line 1818, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/data/wxl/miniforge3/envs/se/lib/python3.10/site-packages/datasets/builder.py", line 343, in __init__
self.config, self.config_id = self._create_builder_config(
File "/data/wxl/miniforge3/envs/se/lib/python3.10/site-packages/datasets/builder.py", line 591, in _create_builder_config
raise ValueError(f"BuilderConfig {builder_config} doesn't have a '{key}' key.")
ValueError: BuilderConfig ParquetConfig(name='default', version=0.0.0, data_dir=None, data_files={'cross_file_first': ['data/cross_file_first-*'], 'cross_file_random': ['data/cross_file_random-*'], 'in_file': ['data/in_file-*']}, description=None, batch_size=None, columns=None, features=None, filters=None) doesn't have a 'ignore_verifications' key
```