url
string | repository_url
string | labels_url
string | comments_url
string | events_url
string | html_url
string | id
int64 | node_id
string | number
int64 | title
string | user
dict | labels
list | state
string | locked
bool | assignee
dict | assignees
list | milestone
dict | comments
list | created_at
timestamp[ns, tz=UTC] | updated_at
timestamp[ns, tz=UTC] | closed_at
timestamp[ns, tz=UTC] | author_association
string | type
float64 | active_lock_reason
float64 | sub_issues_summary
dict | body
string | closed_by
dict | reactions
dict | timeline_url
string | performed_via_github_app
float64 | state_reason
string | draft
float64 | pull_request
dict |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6683
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6683/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6683/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6683/events
|
https://github.com/huggingface/datasets/pull/6683
| 2,142,751,955
|
PR_kwDODunzps5nTxGu
| 6,683
|
Fix imagefolder dataset url
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6683). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005501 / 0.011353 (-0.005851) | 0.003907 / 0.011008 (-0.007101) | 0.063524 / 0.038508 (0.025016) | 0.031773 / 0.023109 (0.008664) | 0.244672 / 0.275898 (-0.031226) | 0.293342 / 0.323480 (-0.030138) | 0.004091 / 0.007986 (-0.003895) | 0.002837 / 0.004328 (-0.001491) | 0.049181 / 0.004250 (0.044930) | 0.044515 / 0.037052 (0.007462) | 0.263932 / 0.258489 (0.005443) | 0.288412 / 0.293841 (-0.005429) | 0.028338 / 0.128546 (-0.100208) | 0.010865 / 0.075646 (-0.064781) | 0.207979 / 0.419271 (-0.211293) | 0.036149 / 0.043533 (-0.007384) | 0.250674 / 0.255139 (-0.004465) | 0.263232 / 0.283200 (-0.019968) | 0.017919 / 0.141683 (-0.123763) | 1.127794 / 1.452155 (-0.324360) | 1.172071 / 1.492716 (-0.320645) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090435 / 0.018006 (0.072429) | 0.300041 / 0.000490 (0.299552) | 0.000217 / 0.000200 (0.000018) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018986 / 0.037411 (-0.018426) | 0.064872 / 0.014526 (0.050346) | 0.074738 / 0.176557 (-0.101818) | 0.121577 / 0.737135 (-0.615558) | 0.076416 / 0.296338 (-0.219923) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279471 / 0.215209 (0.064262) | 2.743066 / 2.077655 (0.665411) | 1.429511 / 1.504120 (-0.074609) | 1.315391 / 1.541195 (-0.225804) | 1.371255 / 1.468490 (-0.097235) | 0.570708 / 4.584777 (-4.014069) | 2.373047 / 3.745712 (-1.372666) | 2.813198 / 5.269862 (-2.456663) | 1.768928 / 4.565676 (-2.796749) | 0.066031 / 0.424275 (-0.358244) | 0.005074 / 0.007607 (-0.002533) | 0.333484 / 0.226044 (0.107440) | 3.295002 / 2.268929 (1.026074) | 1.796089 / 55.444624 (-53.648535) | 1.521849 / 6.876477 (-5.354627) | 1.604417 / 2.142072 (-0.537655) | 0.645235 / 4.805227 (-4.159992) | 0.119226 / 6.500664 (-6.381439) | 0.043275 / 0.075469 (-0.032194) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.986350 / 1.841788 (-0.855438) | 11.921886 / 8.074308 (3.847578) | 9.878841 / 10.191392 (-0.312551) | 0.141072 / 0.680424 (-0.539352) | 0.014514 / 0.534201 (-0.519687) | 0.304060 / 0.579283 (-0.275223) | 0.267844 / 0.434364 (-0.166520) | 0.324881 / 0.540337 (-0.215457) | 0.421426 / 1.386936 (-0.965510) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005322 / 0.011353 (-0.006030) | 0.003942 / 0.011008 (-0.007066) | 0.050629 / 0.038508 (0.012121) | 0.031176 / 0.023109 (0.008066) | 0.279627 / 0.275898 (0.003729) | 0.302667 / 0.323480 (-0.020813) | 0.004281 / 0.007986 (-0.003705) | 0.002900 / 0.004328 (-0.001428) | 0.048168 / 0.004250 (0.043918) | 0.046094 / 0.037052 (0.009042) | 0.290714 / 0.258489 (0.032224) | 0.321336 / 0.293841 (0.027496) | 0.047934 / 0.128546 (-0.080612) | 0.010773 / 0.075646 (-0.064873) | 0.059439 / 0.419271 (-0.359832) | 0.033644 / 0.043533 (-0.009889) | 0.273710 / 0.255139 (0.018571) | 0.295144 / 0.283200 (0.011944) | 0.018115 / 0.141683 (-0.123568) | 1.150302 / 1.452155 (-0.301853) | 1.197304 / 1.492716 (-0.295412) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090262 / 0.018006 (0.072255) | 0.300727 / 0.000490 (0.300238) | 0.000228 / 0.000200 (0.000028) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022706 / 0.037411 (-0.014706) | 0.077420 / 0.014526 (0.062894) | 0.089119 / 0.176557 (-0.087437) | 0.126760 / 0.737135 (-0.610375) | 0.090702 / 0.296338 (-0.205637) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296558 / 0.215209 (0.081349) | 2.865311 / 2.077655 (0.787656) | 1.587355 / 1.504120 (0.083235) | 1.491660 / 1.541195 (-0.049534) | 1.513604 / 1.468490 (0.045114) | 0.565209 / 4.584777 (-4.019568) | 2.450648 / 3.745712 (-1.295064) | 2.709941 / 5.269862 (-2.559921) | 1.775032 / 4.565676 (-2.790645) | 0.063767 / 0.424275 (-0.360508) | 0.005047 / 0.007607 (-0.002560) | 0.347406 / 0.226044 (0.121361) | 3.416671 / 2.268929 (1.147743) | 1.949653 / 55.444624 (-53.494971) | 1.669885 / 6.876477 (-5.206592) | 1.848125 / 2.142072 (-0.293947) | 0.648179 / 4.805227 (-4.157048) | 0.116374 / 6.500664 (-6.384290) | 0.041816 / 0.075469 (-0.033653) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.007009 / 1.841788 (-0.834779) | 12.749964 / 8.074308 (4.675656) | 10.765890 / 10.191392 (0.574498) | 0.141743 / 0.680424 (-0.538681) | 0.016077 / 0.534201 (-0.518124) | 0.293275 / 0.579283 (-0.286008) | 0.277064 / 0.434364 (-0.157300) | 0.327039 / 0.540337 (-0.213299) | 0.421784 / 1.386936 (-0.965152) |\n\n</details>\n</details>\n\n\n"
] | 2024-02-19T16:26:51Z
| 2024-02-19T17:24:25Z
| 2024-02-19T17:18:10Z
|
COLLABORATOR
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6683/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6683/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6683.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6683",
"merged_at": "2024-02-19T17:18:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6683.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6683"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4996
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4996/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4996/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4996/events
|
https://github.com/huggingface/datasets/issues/4996
| 1,379,345,161
|
I_kwDODunzps5SNyMJ
| 4,996
|
Dataset Viewer issue for Jean-Baptiste/wikiner_fr
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The script uses `Dataset.load_from_disk`, which as you can expect, doesn't work in streaming mode.\r\n\r\nIt would probably be more practical to load the dataset locally using `Dataset.load_from_disk` first and then `push_to_hub` to upload it in Parquet on the Hub",
"I've transferred this issue to the Hub repo: https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr/discussions/3\r\n\r\nI'm closing this."
] | 2022-09-20T12:32:07Z
| 2022-09-27T12:35:44Z
| 2022-09-27T12:35:44Z
|
COLLABORATOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Link
https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr
### Description
```
Error code: StreamingRowsError
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: 'zip:/data/train::https:/huggingface.co/datasets/Jean-Baptiste/wikiner_fr/resolve/main/data.zip/state.json'
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/responses/first_rows.py", line 337, in get_first_rows_response
rows = get_rows(dataset, config, split, streaming=True, rows_max_number=rows_max_number, hf_token=hf_token)
File "/src/services/worker/src/worker/utils.py", line 123, in decorator
return func(*args, **kwargs)
File "/src/services/worker/src/worker/responses/first_rows.py", line 77, in get_rows
rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 718, in __iter__
for key, example in self._iter():
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 708, in _iter
yield from ex_iterable
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 112, in __iter__
yield from self.generate_examples_fn(**self.kwargs)
File "/tmp/modules-cache/datasets_modules/datasets/Jean-Baptiste--wikiner_fr/683a580ba6ec769d508f7dfc603a651667b0ed3817b1ae5bfd45f97cc024923f/wikiner_fr.py", line 165, in _generate_examples
dataset = Dataset.load_from_disk(filepath)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1210, in load_from_disk
with open(Path(dataset_path, config.DATASET_STATE_JSON_FILENAME).as_posix(), encoding="utf-8") as state_file:
FileNotFoundError: [Errno 2] No such file or directory: 'zip:/data/train::https:/huggingface.co/datasets/Jean-Baptiste/wikiner_fr/resolve/main/data.zip/state.json'
```
Is it an error with the dataset script, or the data itself, @huggingface/datasets?
https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr/tree/main
### Owner
No
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4996/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4996/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6461
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6461/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6461/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6461/events
|
https://github.com/huggingface/datasets/pull/6461
| 2,018,850,731
|
PR_kwDODunzps5gykvO
| 6,461
|
Fix shard retry mechanism in `push_to_hub`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@Wauplin Maybe `504` should be added to the `retry_on_status_codes` tuple [here](https://github.com/huggingface/huggingface_hub/blob/5eefebee2c150a2df950ab710db350e96c711433/src/huggingface_hub/lfs.py#L300) to guard against https://github.com/huggingface/datasets/issues/3872",
"We could but I'm not sure to have witness a 504 on S3 before. The issue reported in https://github.com/huggingface/datasets/issues/3872 is a 504 on the `/upload` endpoint on the Hub and this is not an endpoint that is retried on [this line](https://github.com/huggingface/huggingface_hub/blob/5eefebee2c150a2df950ab710db350e96c711433/src/huggingface_hub/lfs.py#L300).",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005110 / 0.011353 (-0.006243) | 0.003307 / 0.011008 (-0.007701) | 0.062601 / 0.038508 (0.024093) | 0.049644 / 0.023109 (0.026534) | 0.243195 / 0.275898 (-0.032703) | 0.273543 / 0.323480 (-0.049936) | 0.003862 / 0.007986 (-0.004123) | 0.002624 / 0.004328 (-0.001705) | 0.048273 / 0.004250 (0.044023) | 0.037820 / 0.037052 (0.000768) | 0.249134 / 0.258489 (-0.009355) | 0.319359 / 0.293841 (0.025518) | 0.027816 / 0.128546 (-0.100730) | 0.010422 / 0.075646 (-0.065225) | 0.206607 / 0.419271 (-0.212665) | 0.035719 / 0.043533 (-0.007814) | 0.250300 / 0.255139 (-0.004839) | 0.290377 / 0.283200 (0.007177) | 0.018459 / 0.141683 (-0.123224) | 1.114664 / 1.452155 (-0.337490) | 1.171429 / 1.492716 (-0.321288) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091483 / 0.018006 (0.073477) | 0.302770 / 0.000490 (0.302281) | 0.000203 / 0.000200 (0.000003) | 0.000047 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018870 / 0.037411 (-0.018541) | 0.062692 / 0.014526 (0.048166) | 0.075381 / 0.176557 (-0.101176) | 0.122338 / 0.737135 (-0.614797) | 0.075608 / 0.296338 (-0.220730) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288115 / 0.215209 (0.072906) | 2.816183 / 2.077655 (0.738528) | 1.535601 / 1.504120 (0.031481) | 1.409546 / 1.541195 (-0.131648) | 1.438569 / 1.468490 (-0.029921) | 0.561797 / 4.584777 (-4.022980) | 2.373921 / 3.745712 (-1.371791) | 2.739437 / 5.269862 (-2.530424) | 1.750921 / 4.565676 (-2.814755) | 0.062114 / 0.424275 (-0.362161) | 0.004965 / 0.007607 (-0.002642) | 0.348614 / 0.226044 (0.122569) | 3.519631 / 2.268929 (1.250703) | 1.910797 / 55.444624 (-53.533827) | 1.610541 / 6.876477 (-5.265936) | 1.617972 / 2.142072 (-0.524100) | 0.639421 / 4.805227 (-4.165806) | 0.117371 / 6.500664 (-6.383293) | 0.041851 / 0.075469 (-0.033618) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.945563 / 1.841788 (-0.896224) | 11.362399 / 8.074308 (3.288090) | 10.468468 / 10.191392 (0.277075) | 0.128925 / 0.680424 (-0.551499) | 0.013892 / 0.534201 (-0.520309) | 0.285487 / 0.579283 (-0.293796) | 0.269295 / 0.434364 (-0.165069) | 0.324843 / 0.540337 (-0.215495) | 0.438452 / 1.386936 (-0.948484) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005303 / 0.011353 (-0.006050) | 0.003162 / 0.011008 (-0.007846) | 0.048177 / 0.038508 (0.009669) | 0.048708 / 0.023109 (0.025599) | 0.271663 / 0.275898 (-0.004235) | 0.289948 / 0.323480 (-0.033532) | 0.003955 / 0.007986 (-0.004030) | 0.002616 / 0.004328 (-0.001713) | 0.047510 / 0.004250 (0.043260) | 0.039938 / 0.037052 (0.002886) | 0.277449 / 0.258489 (0.018960) | 0.300315 / 0.293841 (0.006474) | 0.029263 / 0.128546 (-0.099283) | 0.010403 / 0.075646 (-0.065244) | 0.056682 / 0.419271 (-0.362590) | 0.032757 / 0.043533 (-0.010776) | 0.273291 / 0.255139 (0.018152) | 0.289023 / 0.283200 (0.005824) | 0.017843 / 0.141683 (-0.123840) | 1.124762 / 1.452155 (-0.327393) | 1.176646 / 1.492716 (-0.316070) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004568 / 0.018006 (-0.013438) | 0.300715 / 0.000490 (0.300225) | 0.000212 / 0.000200 (0.000012) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021528 / 0.037411 (-0.015883) | 0.068317 / 0.014526 (0.053792) | 0.081358 / 0.176557 (-0.095199) | 0.119297 / 0.737135 (-0.617838) | 0.082445 / 0.296338 (-0.213893) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289681 / 0.215209 (0.074472) | 2.843862 / 2.077655 (0.766208) | 1.574257 / 1.504120 (0.070137) | 1.454026 / 1.541195 (-0.087169) | 1.478379 / 1.468490 (0.009889) | 0.558259 / 4.584777 (-4.026518) | 2.513261 / 3.745712 (-1.232451) | 2.759751 / 5.269862 (-2.510111) | 1.730335 / 4.565676 (-2.835341) | 0.063805 / 0.424275 (-0.360470) | 0.004991 / 0.007607 (-0.002616) | 0.346586 / 0.226044 (0.120542) | 3.369163 / 2.268929 (1.100234) | 1.934734 / 55.444624 (-53.509890) | 1.658864 / 6.876477 (-5.217613) | 1.645621 / 2.142072 (-0.496452) | 0.636633 / 4.805227 (-4.168594) | 0.116839 / 6.500664 (-6.383825) | 0.040863 / 0.075469 (-0.034606) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.960925 / 1.841788 (-0.880863) | 11.769189 / 8.074308 (3.694881) | 10.713662 / 10.191392 (0.522270) | 0.140510 / 0.680424 (-0.539914) | 0.015424 / 0.534201 (-0.518777) | 0.288039 / 0.579283 (-0.291244) | 0.277623 / 0.434364 (-0.156741) | 0.322622 / 0.540337 (-0.217716) | 0.539805 / 1.386936 (-0.847131) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005501 / 0.011353 (-0.005852) | 0.003754 / 0.011008 (-0.007254) | 0.062628 / 0.038508 (0.024120) | 0.059951 / 0.023109 (0.036842) | 0.254851 / 0.275898 (-0.021047) | 0.272133 / 0.323480 (-0.051347) | 0.003962 / 0.007986 (-0.004024) | 0.002759 / 0.004328 (-0.001569) | 0.048412 / 0.004250 (0.044161) | 0.039349 / 0.037052 (0.002297) | 0.253093 / 0.258489 (-0.005397) | 0.287048 / 0.293841 (-0.006793) | 0.027197 / 0.128546 (-0.101349) | 0.010828 / 0.075646 (-0.064819) | 0.206371 / 0.419271 (-0.212901) | 0.035881 / 0.043533 (-0.007652) | 0.254905 / 0.255139 (-0.000234) | 0.273819 / 0.283200 (-0.009381) | 0.018041 / 0.141683 (-0.123642) | 1.103970 / 1.452155 (-0.348185) | 1.166340 / 1.492716 (-0.326377) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093196 / 0.018006 (0.075190) | 0.302690 / 0.000490 (0.302200) | 0.000219 / 0.000200 (0.000019) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019552 / 0.037411 (-0.017860) | 0.062337 / 0.014526 (0.047811) | 0.074070 / 0.176557 (-0.102486) | 0.120998 / 0.737135 (-0.616137) | 0.076265 / 0.296338 (-0.220074) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.272637 / 0.215209 (0.057427) | 2.693350 / 2.077655 (0.615696) | 1.398020 / 1.504120 (-0.106100) | 1.285706 / 1.541195 (-0.255488) | 1.342810 / 1.468490 (-0.125680) | 0.565378 / 4.584777 (-4.019399) | 2.390131 / 3.745712 (-1.355581) | 2.892137 / 5.269862 (-2.377725) | 1.819840 / 4.565676 (-2.745836) | 0.062789 / 0.424275 (-0.361486) | 0.004920 / 0.007607 (-0.002687) | 0.329281 / 0.226044 (0.103237) | 3.261664 / 2.268929 (0.992735) | 1.775102 / 55.444624 (-53.669523) | 1.514341 / 6.876477 (-5.362136) | 1.530805 / 2.142072 (-0.611267) | 0.641009 / 4.805227 (-4.164218) | 0.118626 / 6.500664 (-6.382038) | 0.042732 / 0.075469 (-0.032737) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.933179 / 1.841788 (-0.908609) | 12.085247 / 8.074308 (4.010939) | 10.541596 / 10.191392 (0.350204) | 0.140141 / 0.680424 (-0.540283) | 0.014646 / 0.534201 (-0.519555) | 0.289640 / 0.579283 (-0.289643) | 0.281042 / 0.434364 (-0.153322) | 0.326462 / 0.540337 (-0.213876) | 0.441981 / 1.386936 (-0.944955) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005259 / 0.011353 (-0.006094) | 0.003766 / 0.011008 (-0.007242) | 0.048782 / 0.038508 (0.010273) | 0.064946 / 0.023109 (0.041836) | 0.264529 / 0.275898 (-0.011369) | 0.289675 / 0.323480 (-0.033805) | 0.004057 / 0.007986 (-0.003928) | 0.002805 / 0.004328 (-0.001523) | 0.047709 / 0.004250 (0.043459) | 0.041149 / 0.037052 (0.004096) | 0.271254 / 0.258489 (0.012765) | 0.296685 / 0.293841 (0.002844) | 0.029486 / 0.128546 (-0.099060) | 0.010608 / 0.075646 (-0.065038) | 0.056392 / 0.419271 (-0.362879) | 0.033181 / 0.043533 (-0.010352) | 0.267029 / 0.255139 (0.011890) | 0.284987 / 0.283200 (0.001787) | 0.018045 / 0.141683 (-0.123637) | 1.137358 / 1.452155 (-0.314796) | 1.184007 / 1.492716 (-0.308709) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004603 / 0.018006 (-0.013403) | 0.303901 / 0.000490 (0.303411) | 0.000225 / 0.000200 (0.000025) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021957 / 0.037411 (-0.015454) | 0.069427 / 0.014526 (0.054901) | 0.082394 / 0.176557 (-0.094163) | 0.120745 / 0.737135 (-0.616390) | 0.084571 / 0.296338 (-0.211767) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292832 / 0.215209 (0.077623) | 2.824295 / 2.077655 (0.746640) | 1.563273 / 1.504120 (0.059153) | 1.440202 / 1.541195 (-0.100992) | 1.489810 / 1.468490 (0.021320) | 0.561120 / 4.584777 (-4.023657) | 2.439045 / 3.745712 (-1.306667) | 2.867139 / 5.269862 (-2.402722) | 1.793812 / 4.565676 (-2.771865) | 0.062797 / 0.424275 (-0.361478) | 0.005033 / 0.007607 (-0.002574) | 0.343648 / 0.226044 (0.117604) | 3.432285 / 2.268929 (1.163357) | 1.918175 / 55.444624 (-53.526449) | 1.637245 / 6.876477 (-5.239232) | 1.709246 / 2.142072 (-0.432826) | 0.634744 / 4.805227 (-4.170483) | 0.115782 / 6.500664 (-6.384882) | 0.041228 / 0.075469 (-0.034241) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962369 / 1.841788 (-0.879418) | 12.750819 / 8.074308 (4.676511) | 10.927356 / 10.191392 (0.735964) | 0.143454 / 0.680424 (-0.536970) | 0.015348 / 0.534201 (-0.518853) | 0.291207 / 0.579283 (-0.288076) | 0.276924 / 0.434364 (-0.157440) | 0.327287 / 0.540337 (-0.213050) | 0.577439 / 1.386936 (-0.809497) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005070 / 0.011353 (-0.006283) | 0.003475 / 0.011008 (-0.007533) | 0.061985 / 0.038508 (0.023477) | 0.048539 / 0.023109 (0.025430) | 0.229935 / 0.275898 (-0.045963) | 0.255247 / 0.323480 (-0.068233) | 0.003919 / 0.007986 (-0.004066) | 0.002664 / 0.004328 (-0.001664) | 0.048892 / 0.004250 (0.044642) | 0.037381 / 0.037052 (0.000328) | 0.238517 / 0.258489 (-0.019972) | 0.284069 / 0.293841 (-0.009772) | 0.027513 / 0.128546 (-0.101033) | 0.010778 / 0.075646 (-0.064868) | 0.205004 / 0.419271 (-0.214268) | 0.035553 / 0.043533 (-0.007980) | 0.230117 / 0.255139 (-0.025022) | 0.251150 / 0.283200 (-0.032050) | 0.017951 / 0.141683 (-0.123732) | 1.145548 / 1.452155 (-0.306607) | 1.191659 / 1.492716 (-0.301057) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092335 / 0.018006 (0.074329) | 0.300264 / 0.000490 (0.299774) | 0.000206 / 0.000200 (0.000006) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018608 / 0.037411 (-0.018804) | 0.060376 / 0.014526 (0.045850) | 0.073551 / 0.176557 (-0.103006) | 0.118840 / 0.737135 (-0.618295) | 0.074447 / 0.296338 (-0.221892) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287033 / 0.215209 (0.071824) | 2.770958 / 2.077655 (0.693303) | 1.443986 / 1.504120 (-0.060134) | 1.314627 / 1.541195 (-0.226567) | 1.342287 / 1.468490 (-0.126203) | 0.559607 / 4.584777 (-4.025170) | 2.409678 / 3.745712 (-1.336034) | 2.772566 / 5.269862 (-2.497295) | 1.743511 / 4.565676 (-2.822165) | 0.062277 / 0.424275 (-0.361998) | 0.004952 / 0.007607 (-0.002655) | 0.330581 / 0.226044 (0.104537) | 3.280385 / 2.268929 (1.011456) | 1.809599 / 55.444624 (-53.635025) | 1.532186 / 6.876477 (-5.344290) | 1.529689 / 2.142072 (-0.612383) | 0.645213 / 4.805227 (-4.160014) | 0.117564 / 6.500664 (-6.383100) | 0.041657 / 0.075469 (-0.033812) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.943912 / 1.841788 (-0.897876) | 11.414317 / 8.074308 (3.340009) | 10.394915 / 10.191392 (0.203523) | 0.129271 / 0.680424 (-0.551153) | 0.013934 / 0.534201 (-0.520267) | 0.288217 / 0.579283 (-0.291066) | 0.267171 / 0.434364 (-0.167193) | 0.327112 / 0.540337 (-0.213225) | 0.446680 / 1.386936 (-0.940256) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005200 / 0.011353 (-0.006152) | 0.003453 / 0.011008 (-0.007555) | 0.048736 / 0.038508 (0.010228) | 0.051073 / 0.023109 (0.027964) | 0.276591 / 0.275898 (0.000693) | 0.294495 / 0.323480 (-0.028985) | 0.004069 / 0.007986 (-0.003917) | 0.002945 / 0.004328 (-0.001383) | 0.047090 / 0.004250 (0.042839) | 0.040445 / 0.037052 (0.003393) | 0.278464 / 0.258489 (0.019975) | 0.304020 / 0.293841 (0.010179) | 0.028811 / 0.128546 (-0.099736) | 0.010388 / 0.075646 (-0.065259) | 0.057214 / 0.419271 (-0.362057) | 0.032588 / 0.043533 (-0.010945) | 0.277694 / 0.255139 (0.022555) | 0.294979 / 0.283200 (0.011779) | 0.018384 / 0.141683 (-0.123299) | 1.162332 / 1.452155 (-0.289822) | 1.188355 / 1.492716 (-0.304361) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090501 / 0.018006 (0.072495) | 0.303122 / 0.000490 (0.302632) | 0.000222 / 0.000200 (0.000022) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022536 / 0.037411 (-0.014876) | 0.068452 / 0.014526 (0.053926) | 0.080932 / 0.176557 (-0.095625) | 0.119185 / 0.737135 (-0.617950) | 0.081513 / 0.296338 (-0.214825) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291522 / 0.215209 (0.076313) | 2.849467 / 2.077655 (0.771812) | 1.597395 / 1.504120 (0.093275) | 1.512872 / 1.541195 (-0.028323) | 1.488144 / 1.468490 (0.019654) | 0.572436 / 4.584777 (-4.012341) | 2.440129 / 3.745712 (-1.305583) | 2.788045 / 5.269862 (-2.481817) | 1.754246 / 4.565676 (-2.811430) | 0.066706 / 0.424275 (-0.357569) | 0.005035 / 0.007607 (-0.002573) | 0.336621 / 0.226044 (0.110576) | 3.322820 / 2.268929 (1.053891) | 1.940494 / 55.444624 (-53.504130) | 1.670022 / 6.876477 (-5.206454) | 1.666353 / 2.142072 (-0.475720) | 0.646180 / 4.805227 (-4.159047) | 0.116676 / 6.500664 (-6.383988) | 0.040559 / 0.075469 (-0.034910) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.971396 / 1.841788 (-0.870392) | 11.782426 / 8.074308 (3.708118) | 10.672034 / 10.191392 (0.480642) | 0.137658 / 0.680424 (-0.542766) | 0.016210 / 0.534201 (-0.517991) | 0.288302 / 0.579283 (-0.290981) | 0.280775 / 0.434364 (-0.153589) | 0.326962 / 0.540337 (-0.213375) | 0.558511 / 1.386936 (-0.828425) |\n\n</details>\n</details>\n\n\n"
] | 2023-11-30T14:57:14Z
| 2023-12-01T17:57:39Z
| 2023-12-01T17:51:33Z
|
COLLABORATOR
| null | null | null |
When it fails, `preupload_lfs_files` throws a [`RuntimeError`](https://github.com/huggingface/huggingface_hub/blob/5eefebee2c150a2df950ab710db350e96c711433/src/huggingface_hub/_commit_api.py#L402) error and chains the original HTTP error. This PR modifies the retry mechanism's error handling to account for that.
Fix https://github.com/huggingface/datasets/issues/6392
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6461/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6461/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6461.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6461",
"merged_at": "2023-12-01T17:51:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6461.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6461"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7451
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7451/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7451/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7451/events
|
https://github.com/huggingface/datasets/pull/7451
| 2,919,835,663
|
PR_kwDODunzps6OpwDz
| 7,451
|
Fix resuming after `ds.set_epoch(new_epoch)`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7451). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-03-14T10:31:25Z
| 2025-03-14T10:50:11Z
| 2025-03-14T10:50:09Z
|
MEMBER
| null | null | null |
close https://github.com/huggingface/datasets/issues/7447
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7451/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7451/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7451.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7451",
"merged_at": "2025-03-14T10:50:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7451.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7451"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6236
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6236/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6236/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6236/events
|
https://github.com/huggingface/datasets/issues/6236
| 1,893,648,480
|
I_kwDODunzps5w3shg
| 6,236
|
Support buffer shuffle for to_tf_dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7635551?v=4",
"events_url": "https://api.github.com/users/EthanRock/events{/privacy}",
"followers_url": "https://api.github.com/users/EthanRock/followers",
"following_url": "https://api.github.com/users/EthanRock/following{/other_user}",
"gists_url": "https://api.github.com/users/EthanRock/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/EthanRock",
"id": 7635551,
"login": "EthanRock",
"node_id": "MDQ6VXNlcjc2MzU1NTE=",
"organizations_url": "https://api.github.com/users/EthanRock/orgs",
"received_events_url": "https://api.github.com/users/EthanRock/received_events",
"repos_url": "https://api.github.com/users/EthanRock/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/EthanRock/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EthanRock/subscriptions",
"type": "User",
"url": "https://api.github.com/users/EthanRock",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"cc @Rocketknight1 ",
"Hey! You can implement this yourself, just:\r\n\r\n1) Create the dataset with `to_tf_dataset()` with `shuffle=False`\r\n2) Add an `unbatch()` at the end (or use batch_size=1)\r\n3) Add a `shuffle()` to the resulting dataset with your desired buffer size\r\n4) Add a `batch()` at the end again to re-batch your dataset.\r\n\r\nNote that the way we construct datasets in `to_tf_dataset()`, we don't actually shuffle the entire dataset in-memory, using `tf.data.Dataset.shuffle()`! Instead, we shuffle an index array and then load from the dataset with that. This means that shuffling with `tf.data.Dataset.shuffle()` will probably be slower and use more memory than our approach - I don't think adding the option for smaller shuffle buffers will actually save you memory on this!",
"Thanks for your reply! @Rocketknight1 \r\n\"We don't actually shuffle the entire dataset in-memory, using tf.data.Dataset.shuffle()! Instead, we shuffle an index array and then load from the dataset with that.\"\r\nIn such case, there will be random access to dataset data during shuffling. When the dataset is large, the performance can be X10 times slow. I have tried many ways with to_tf_dataset() trying to achieve comparable performance with tf.data.Dataset().shuffle(buffer_size).batch(). But the performance with to_tf_dataset() is still slow. \r\n"
] | 2023-09-13T03:19:44Z
| 2023-09-18T01:11:21Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
I'm using to_tf_dataset to convert a large dataset to tf.data.Dataset and use Keras fit to train model.
Currently, to_tf_dataset only supports full size shuffle, which can be very slow on large dataset.
tf.data.Dataset support buffer shuffle by default.
shuffle(
buffer_size, seed=None, reshuffle_each_iteration=None, name=None
)
### Motivation
I'm very frustrated to find the loading with shuffling large dataset is very slow. It seems impossible to shuffle before training Keras with big dataset.
### Your contribution
NA
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6236/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6236/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6804
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6804/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6804/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6804/events
|
https://github.com/huggingface/datasets/pull/6804
| 2,238,035,124
|
PR_kwDODunzps5sYJFF
| 6,804
|
Fix --repo-type order in cli upload docs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6804). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005222 / 0.011353 (-0.006131) | 0.003306 / 0.011008 (-0.007702) | 0.063326 / 0.038508 (0.024818) | 0.031371 / 0.023109 (0.008261) | 0.244947 / 0.275898 (-0.030951) | 0.264141 / 0.323480 (-0.059339) | 0.004186 / 0.007986 (-0.003800) | 0.002676 / 0.004328 (-0.001653) | 0.048690 / 0.004250 (0.044440) | 0.045172 / 0.037052 (0.008120) | 0.256597 / 0.258489 (-0.001892) | 0.284348 / 0.293841 (-0.009493) | 0.026855 / 0.128546 (-0.101691) | 0.009947 / 0.075646 (-0.065699) | 0.206311 / 0.419271 (-0.212961) | 0.035178 / 0.043533 (-0.008355) | 0.251501 / 0.255139 (-0.003638) | 0.261314 / 0.283200 (-0.021886) | 0.018000 / 0.141683 (-0.123683) | 1.144588 / 1.452155 (-0.307566) | 1.193627 / 1.492716 (-0.299089) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091629 / 0.018006 (0.073623) | 0.298959 / 0.000490 (0.298469) | 0.000207 / 0.000200 (0.000007) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018053 / 0.037411 (-0.019358) | 0.061280 / 0.014526 (0.046754) | 0.074138 / 0.176557 (-0.102419) | 0.119048 / 0.737135 (-0.618088) | 0.074572 / 0.296338 (-0.221767) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282440 / 0.215209 (0.067231) | 2.762017 / 2.077655 (0.684362) | 1.474452 / 1.504120 (-0.029668) | 1.361489 / 1.541195 (-0.179706) | 1.359696 / 1.468490 (-0.108795) | 0.569640 / 4.584777 (-4.015137) | 2.398098 / 3.745712 (-1.347614) | 2.731399 / 5.269862 (-2.538462) | 1.697432 / 4.565676 (-2.868245) | 0.063330 / 0.424275 (-0.360945) | 0.005416 / 0.007607 (-0.002191) | 0.346510 / 0.226044 (0.120465) | 3.276473 / 2.268929 (1.007544) | 1.837605 / 55.444624 (-53.607019) | 1.538654 / 6.876477 (-5.337822) | 1.553943 / 2.142072 (-0.588129) | 0.640571 / 4.805227 (-4.164657) | 0.116736 / 6.500664 (-6.383928) | 0.041701 / 0.075469 (-0.033768) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975846 / 1.841788 (-0.865942) | 11.151727 / 8.074308 (3.077419) | 9.436281 / 10.191392 (-0.755111) | 0.141027 / 0.680424 (-0.539397) | 0.014389 / 0.534201 (-0.519812) | 0.285575 / 0.579283 (-0.293708) | 0.263753 / 0.434364 (-0.170610) | 0.321893 / 0.540337 (-0.218444) | 0.420280 / 1.386936 (-0.966656) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005148 / 0.011353 (-0.006205) | 0.003264 / 0.011008 (-0.007744) | 0.049828 / 0.038508 (0.011320) | 0.031234 / 0.023109 (0.008125) | 0.271079 / 0.275898 (-0.004819) | 0.295256 / 0.323480 (-0.028224) | 0.004128 / 0.007986 (-0.003857) | 0.002637 / 0.004328 (-0.001692) | 0.048145 / 0.004250 (0.043895) | 0.039691 / 0.037052 (0.002638) | 0.287229 / 0.258489 (0.028740) | 0.310477 / 0.293841 (0.016636) | 0.028936 / 0.128546 (-0.099610) | 0.010392 / 0.075646 (-0.065254) | 0.057774 / 0.419271 (-0.361497) | 0.032557 / 0.043533 (-0.010975) | 0.275146 / 0.255139 (0.020007) | 0.291283 / 0.283200 (0.008084) | 0.017724 / 0.141683 (-0.123958) | 1.186831 / 1.452155 (-0.265324) | 1.220086 / 1.492716 (-0.272630) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093575 / 0.018006 (0.075569) | 0.297198 / 0.000490 (0.296709) | 0.000216 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021683 / 0.037411 (-0.015728) | 0.075347 / 0.014526 (0.060821) | 0.085453 / 0.176557 (-0.091103) | 0.125422 / 0.737135 (-0.611713) | 0.087185 / 0.296338 (-0.209153) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301520 / 0.215209 (0.086311) | 2.951614 / 2.077655 (0.873959) | 1.659897 / 1.504120 (0.155777) | 1.528097 / 1.541195 (-0.013097) | 1.552031 / 1.468490 (0.083541) | 0.576297 / 4.584777 (-4.008480) | 2.492349 / 3.745712 (-1.253363) | 2.805999 / 5.269862 (-2.463862) | 1.757556 / 4.565676 (-2.808121) | 0.064940 / 0.424275 (-0.359335) | 0.005314 / 0.007607 (-0.002293) | 0.358838 / 0.226044 (0.132793) | 3.576890 / 2.268929 (1.307961) | 2.030788 / 55.444624 (-53.413837) | 1.743650 / 6.876477 (-5.132826) | 1.745229 / 2.142072 (-0.396844) | 0.647840 / 4.805227 (-4.157387) | 0.116637 / 6.500664 (-6.384027) | 0.040555 / 0.075469 (-0.034915) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.009130 / 1.841788 (-0.832657) | 11.951145 / 8.074308 (3.876836) | 9.968355 / 10.191392 (-0.223037) | 0.139959 / 0.680424 (-0.540465) | 0.015985 / 0.534201 (-0.518216) | 0.286594 / 0.579283 (-0.292689) | 0.275805 / 0.434364 (-0.158559) | 0.328484 / 0.540337 (-0.211854) | 0.419818 / 1.386936 (-0.967118) |\n\n</details>\n</details>\n\n\n"
] | 2024-04-11T15:39:09Z
| 2024-04-11T16:24:57Z
| 2024-04-11T16:18:47Z
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6804/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6804/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6804.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6804",
"merged_at": "2024-04-11T16:18:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6804.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6804"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5708
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5708/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5708/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5708/events
|
https://github.com/huggingface/datasets/issues/5708
| 1,655,023,642
|
I_kwDODunzps5ipaga
| 5,708
|
Dataset sizes are in MiB instead of MB in dataset cards
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"Example of bulk edit: https://huggingface.co/datasets/aeslc/discussions/5",
"looks great! \r\n\r\nDo you encode the fact that you've already converted a dataset? (to not convert it twice) or do you base yourself on the info contained in `dataset_info`",
"I am only looping trough the dataset cards, assuming that all of them were created with MiB.\r\n\r\nI agree we should only run the bulk edit once for all canonical datasets: I'm using a for-loop over canonical datasets.",
"yes, worst case, we have this in structured data:\r\n\r\n<img width=\"337\" alt=\"image\" src=\"https://user-images.githubusercontent.com/326577/230037051-06caddcb-08c8-4953-a710-f3d122917db3.png\">\r\n",
"I have just included as well the conversion from MB to GB if necessary. See: \r\n- https://huggingface.co/datasets/bookcorpus/discussions/2/files\r\n- https://huggingface.co/datasets/asnq/discussions/2/files",
"Nice. Is it another loop? Because in https://huggingface.co/datasets/amazon_us_reviews/discussions/2/files we have `32377.29 MB` for example",
"First, I tested some batches to check the changes made. Then I incorporated the MB to GB conversion. Now I'm running the rest.",
"The bulk edit parsed 751 canonical datasets and updated 166.",
"Thanks a lot!\r\n\r\nThe sizes now match as expected!\r\n\r\n<img width=\"1446\" alt=\"Capture d’écran 2023-04-05 à 16 10 15\" src=\"https://user-images.githubusercontent.com/1676121/230107044-ac2a76ea-a4fe-4e81-a925-f464b85f5edd.png\">\r\n",
"I made another bulk edit of ancient canonical datasets that were moved to community organization. I have parsed 11 datasets and opened a PR on 3 of them:\r\n- [x] \"allenai/scicite\": https://huggingface.co/datasets/allenai/scicite/discussions/3\r\n- [x] \"allenai/scifact\": https://huggingface.co/datasets/allenai/scifact/discussions/2\r\n- [x] \"dair-ai/emotion\": https://huggingface.co/datasets/dair-ai/emotion/discussions/6",
"should we force merge the PR and close this issue?",
"I merged the PRs for \"scicite\" and \"scifact\"."
] | 2023-04-05T06:36:03Z
| 2023-12-21T10:20:28Z
| 2023-12-21T10:20:27Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
As @severo reported in an internal discussion (https://github.com/huggingface/moon-landing/issues/5929):
Now we show the dataset size:
- from the dataset card (in the side column)
- from the datasets-server (in the viewer)
But, even if the size is the same, we see a mismatch because the viewer shows MB, while the info from the README generally shows MiB (even if it's written MB -> https://huggingface.co/datasets/blimp/blob/main/README.md?code=true#L1932)
<img width="664" alt="Capture d’écran 2023-04-04 à 10 16 01" src="https://user-images.githubusercontent.com/1676121/229730887-0bd8fa6e-9462-46c6-bd4e-4d2c5784cabb.png">
TODO: Values to be fixed in: `Size of downloaded dataset files:`, `Size of the generated dataset:` and `Total amount of disk used:`
- [x] Bulk edit on the Hub to fix this in all canonical datasets
- [x] Bulk PR on the Hub to fix ancient canonical datasets that were moved to organizations
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5708/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5708/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6731
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6731/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6731/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6731/events
|
https://github.com/huggingface/datasets/issues/6731
| 2,182,844,673
|
I_kwDODunzps6CG5EB
| 6,731
|
Unexpected behavior when using load_dataset with streaming=True in a for loop
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42908296?v=4",
"events_url": "https://api.github.com/users/uApiv/events{/privacy}",
"followers_url": "https://api.github.com/users/uApiv/followers",
"following_url": "https://api.github.com/users/uApiv/following{/other_user}",
"gists_url": "https://api.github.com/users/uApiv/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/uApiv",
"id": 42908296,
"login": "uApiv",
"node_id": "MDQ6VXNlcjQyOTA4Mjk2",
"organizations_url": "https://api.github.com/users/uApiv/orgs",
"received_events_url": "https://api.github.com/users/uApiv/received_events",
"repos_url": "https://api.github.com/users/uApiv/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/uApiv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uApiv/subscriptions",
"type": "User",
"url": "https://api.github.com/users/uApiv",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"This is normal behavior in python when using `lambda`: the `i` defined in your `lambda` refers to the global variable `i` in your loop, and `i` equals to `1` when you run your `for e in res[0]` line.\r\n\r\nYou should pass `fn_kwargs` that will be passed to your `lambda` instead of using the global variable:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nres=[]\r\nfor i in [0,1]:\r\n di = load_dataset(\r\n \"json\", \r\n data_files='path_to.json', \r\n split='train',\r\n streaming=True, \r\n ).map(lambda x, source: {\"source\": source}, fn_kwargs={\"source\": i})\r\n\r\n res.append(di)\r\n\r\nfor e in res[0]:\r\n print(e)\r\n```\r\n\r\nThis doesn't happen in non-streaming since in that case `map` is executed while the variable `i` has the right value. In streaming mode, `map` is executed on-the-fly when you iterate on the dataset.",
"Thank you very much for your answer. I think this issue can be closed now."
] | 2024-03-12T23:26:43Z
| 2024-04-16T00:00:00Z
| 2024-04-16T00:00:00Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
### My Code
```
from datasets import load_dataset
res=[]
for i in [0,1]:
di=load_dataset(
"json",
data_files='path_to.json',
split='train',
streaming=True,
).map(lambda x: {"source": i})
res.append(di)
for e in res[0]:
print(e)
```
### Unexpected Behavior
Data in `res[0]` has `source=1`. However the expected value is 0.
### FYI
I further switch `streaming` to `False`. And the output value is as expected (0). So there may exist bugs in setting `streaming=True` in a for loop.
### Environment
Python 3.8.0
datasets==2.18.0
transformers==4.28.1
### Steps to reproduce the bug
1. Create a Json file with any content.
2. Run the provided code.
3. Switch `streaming` to `False` and run again to see the expected behavior.
### Expected behavior
The expected behavior is the data are mapped with its corresponding value in the for loop.
### Environment info
Python 3.8.0
datasets==2.18.0
transformers==4.28.1
Ubuntu 20.04
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42908296?v=4",
"events_url": "https://api.github.com/users/uApiv/events{/privacy}",
"followers_url": "https://api.github.com/users/uApiv/followers",
"following_url": "https://api.github.com/users/uApiv/following{/other_user}",
"gists_url": "https://api.github.com/users/uApiv/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/uApiv",
"id": 42908296,
"login": "uApiv",
"node_id": "MDQ6VXNlcjQyOTA4Mjk2",
"organizations_url": "https://api.github.com/users/uApiv/orgs",
"received_events_url": "https://api.github.com/users/uApiv/received_events",
"repos_url": "https://api.github.com/users/uApiv/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/uApiv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uApiv/subscriptions",
"type": "User",
"url": "https://api.github.com/users/uApiv",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6731/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6731/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7210
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7210/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7210/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7210/events
|
https://github.com/huggingface/datasets/issues/7210
| 2,575,883,939
|
I_kwDODunzps6ZiN6j
| 7,210
|
Convert Array features to numpy arrays rather than lists by default
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] | 2024-10-09T13:05:21Z
| 2024-10-09T13:05:21Z
| null |
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
It is currently quite easy to cause massive slowdowns when using datasets and not familiar with the underlying data conversions by e.g. making bad choices of formatting.
Would it be more user-friendly to set defaults that avoid this as much as possible? e.g. format Array features as numpy arrays rather than python lists
### Motivation
Default array formatting leads to slow performance: e.g.
```python
import numpy as np
from datasets import Dataset, Features, Array3D
features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32"), "array1": Array3D((None,10,10), dtype="float32")})
dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,10,10), dtype=np.float32) for x in [2000,1000]*25] for i in range(2)}, features=features)
```
```python
t0 = time.time()
for ex in ds:
pass
t1 = time.time()
```
~1.4 s
```python
ds = dataset.to_iterable_dataset()
t0 = time.time()
for ex in ds:
pass
t1 = time.time()
```
~10s
```python
ds = dataset.with_format("numpy")
t0 = time.time()
for ex in ds:
pass
t1 = time.time()
```
~0.04s
```python
ds = dataset.to_iterable_dataset().with_format("numpy")
t0 = time.time()
for ex in ds:
pass
t1 = time.time()
```
~0.04s
### Your contribution
May be able to contribute
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7210/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7210/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/4957
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4957/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4957/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4957/events
|
https://github.com/huggingface/datasets/pull/4957
| 1,366,532,849
|
PR_kwDODunzps4-nGIk
| 4,957
|
Add `Dataset.from_generator`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I restarted the builder PR job just in case",
"_The documentation is not available anymore as the PR was closed or merged._",
"CI is now green. https://github.com/huggingface/doc-builder/pull/296 explains why it failed."
] | 2022-09-08T15:08:25Z
| 2022-09-16T14:46:35Z
| 2022-09-16T14:44:18Z
|
COLLABORATOR
| null | null | null |
Add `Dataset.from_generator` to the API to allow creating datasets from data larger than RAM. The implementation relies on a packaged module not exposed in `load_dataset` to tie this method with `datasets`' caching mechanism.
Closes https://github.com/huggingface/datasets/issues/4417
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4957/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4957/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4957.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4957",
"merged_at": "2022-09-16T14:44:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4957.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4957"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6890
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6890/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6890/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6890/events
|
https://github.com/huggingface/datasets/issues/6890
| 2,288,699,041
|
I_kwDODunzps6Iasah
| 6,890
|
add `with_transform` and/or `set_transform` to IterableDataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/70411813?v=4",
"events_url": "https://api.github.com/users/not-lain/events{/privacy}",
"followers_url": "https://api.github.com/users/not-lain/followers",
"following_url": "https://api.github.com/users/not-lain/following{/other_user}",
"gists_url": "https://api.github.com/users/not-lain/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/not-lain",
"id": 70411813,
"login": "not-lain",
"node_id": "MDQ6VXNlcjcwNDExODEz",
"organizations_url": "https://api.github.com/users/not-lain/orgs",
"received_events_url": "https://api.github.com/users/not-lain/received_events",
"repos_url": "https://api.github.com/users/not-lain/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/not-lain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/not-lain/subscriptions",
"type": "User",
"url": "https://api.github.com/users/not-lain",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] | 2024-05-10T01:00:12Z
| 2024-05-10T01:00:46Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
when working with a really large dataset it would save us a lot of time (and compute resources) to use either with_transform or the set_transform from the Dataset class instead of waiting for the entire dataset to map
### Motivation
don't want to wait for a really long dataset to map, this would give IterableDataset an extra advantage over the Dataset class.
reducing time and resources
### Your contribution
I am a little busy with my job search lately, but would post about this feature in my social media.
Apologies again (dad going to kick me out soon), if I ever have some free time I will contribute to making this a reality, but that's going to be hard
/ (┬┬﹏┬┬)\
| null |
{
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6890/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6890/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/4628
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4628/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4628/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4628/events
|
https://github.com/huggingface/datasets/pull/4628
| 1,293,361,308
|
PR_kwDODunzps46zvFJ
| 4,628
|
Fix time type `_arrow_to_datasets_dtype` conversion
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-04T16:20:15Z
| 2022-07-07T14:08:38Z
| 2022-07-07T13:57:12Z
|
COLLABORATOR
| null | null | null |
Fix #4620
The issue stems from the fact that `pa.array([time_data]).type` returns `DataType(time64[unit])`, which doesn't expose the `unit` attribute, instead of `Time64Type(time64[unit])`. I believe this is a bug in PyArrow. Luckily, the both types have the same `str()`, so in this PR I call `pa.type_for_alias(str(type))` to convert them both to the `Time64Type(time64[unit])` format.
cc @severo
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4628/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4628/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4628.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4628",
"merged_at": "2022-07-07T13:57:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4628.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4628"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5263
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5263/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5263/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5263/events
|
https://github.com/huggingface/datasets/issues/5263
| 1,455,252,626
|
I_kwDODunzps5WvWSS
| 5,263
|
Save a dataset in a determined number of shards
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] | null |
[] | 2022-11-18T14:43:54Z
| 2022-12-14T18:22:59Z
| 2022-12-14T18:22:59Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
This is useful to distribute the shards to training nodes.
This can be implemented in `save_to_disk` and can also leverage multiprocessing to speed up the process
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5263/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5263/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6301
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6301/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6301/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6301/events
|
https://github.com/huggingface/datasets/pull/6301
| 1,940,183,999
|
PR_kwDODunzps5cpPVh
| 6,301
|
Unpin `tensorflow` maximum version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006663 / 0.011353 (-0.004690) | 0.004091 / 0.011008 (-0.006918) | 0.084954 / 0.038508 (0.046445) | 0.071869 / 0.023109 (0.048760) | 0.314706 / 0.275898 (0.038808) | 0.352794 / 0.323480 (0.029314) | 0.004027 / 0.007986 (-0.003959) | 0.003371 / 0.004328 (-0.000957) | 0.065456 / 0.004250 (0.061205) | 0.055828 / 0.037052 (0.018775) | 0.316502 / 0.258489 (0.058013) | 0.377979 / 0.293841 (0.084138) | 0.030870 / 0.128546 (-0.097676) | 0.008616 / 0.075646 (-0.067030) | 0.288625 / 0.419271 (-0.130646) | 0.052314 / 0.043533 (0.008781) | 0.322725 / 0.255139 (0.067586) | 0.351810 / 0.283200 (0.068611) | 0.025726 / 0.141683 (-0.115957) | 1.439308 / 1.452155 (-0.012847) | 1.524484 / 1.492716 (0.031768) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235212 / 0.018006 (0.217206) | 0.444926 / 0.000490 (0.444437) | 0.009887 / 0.000200 (0.009687) | 0.000402 / 0.000054 (0.000347) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028956 / 0.037411 (-0.008455) | 0.084401 / 0.014526 (0.069875) | 0.339686 / 0.176557 (0.163130) | 0.186785 / 0.737135 (-0.550350) | 0.195017 / 0.296338 (-0.101322) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.405480 / 0.215209 (0.190271) | 4.024315 / 2.077655 (1.946661) | 2.056398 / 1.504120 (0.552278) | 1.912099 / 1.541195 (0.370904) | 1.950119 / 1.468490 (0.481629) | 0.486071 / 4.584777 (-4.098706) | 3.578501 / 3.745712 (-0.167211) | 3.268980 / 5.269862 (-2.000881) | 2.018114 / 4.565676 (-2.547563) | 0.057440 / 0.424275 (-0.366835) | 0.007281 / 0.007607 (-0.000326) | 0.474760 / 0.226044 (0.248716) | 4.746908 / 2.268929 (2.477979) | 2.550111 / 55.444624 (-52.894513) | 2.171932 / 6.876477 (-4.704544) | 2.392235 / 2.142072 (0.250162) | 0.585940 / 4.805227 (-4.219287) | 0.136445 / 6.500664 (-6.364219) | 0.062125 / 0.075469 (-0.013344) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.270763 / 1.841788 (-0.571025) | 19.213516 / 8.074308 (11.139208) | 13.992620 / 10.191392 (3.801228) | 0.167356 / 0.680424 (-0.513068) | 0.018261 / 0.534201 (-0.515940) | 0.392489 / 0.579283 (-0.186794) | 0.418845 / 0.434364 (-0.015519) | 0.461824 / 0.540337 (-0.078513) | 0.649661 / 1.386936 (-0.737275) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006675 / 0.011353 (-0.004678) | 0.003913 / 0.011008 (-0.007096) | 0.064943 / 0.038508 (0.026435) | 0.072426 / 0.023109 (0.049317) | 0.400785 / 0.275898 (0.124887) | 0.434359 / 0.323480 (0.110879) | 0.005370 / 0.007986 (-0.002616) | 0.003290 / 0.004328 (-0.001038) | 0.065035 / 0.004250 (0.060785) | 0.054924 / 0.037052 (0.017872) | 0.404442 / 0.258489 (0.145953) | 0.439027 / 0.293841 (0.145186) | 0.032467 / 0.128546 (-0.096080) | 0.008565 / 0.075646 (-0.067081) | 0.070653 / 0.419271 (-0.348619) | 0.048034 / 0.043533 (0.004501) | 0.400869 / 0.255139 (0.145730) | 0.423048 / 0.283200 (0.139848) | 0.022757 / 0.141683 (-0.118926) | 1.516956 / 1.452155 (0.064801) | 1.581599 / 1.492716 (0.088883) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.214761 / 0.018006 (0.196755) | 0.440921 / 0.000490 (0.440431) | 0.007538 / 0.000200 (0.007338) | 0.000087 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032313 / 0.037411 (-0.005099) | 0.091365 / 0.014526 (0.076839) | 0.106665 / 0.176557 (-0.069891) | 0.158637 / 0.737135 (-0.578498) | 0.104894 / 0.296338 (-0.191445) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432995 / 0.215209 (0.217786) | 4.339911 / 2.077655 (2.262256) | 2.313139 / 1.504120 (0.809019) | 2.142552 / 1.541195 (0.601357) | 2.279275 / 1.468490 (0.810785) | 0.501133 / 4.584777 (-4.083644) | 3.696160 / 3.745712 (-0.049552) | 3.341886 / 5.269862 (-1.927976) | 2.105972 / 4.565676 (-2.459705) | 0.059268 / 0.424275 (-0.365008) | 0.007568 / 0.007607 (-0.000039) | 0.512546 / 0.226044 (0.286502) | 5.130219 / 2.268929 (2.861290) | 2.808292 / 55.444624 (-52.636332) | 2.478721 / 6.876477 (-4.397755) | 2.679341 / 2.142072 (0.537269) | 0.599022 / 4.805227 (-4.206206) | 0.143761 / 6.500664 (-6.356903) | 0.062061 / 0.075469 (-0.013409) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.430507 / 1.841788 (-0.411281) | 20.458085 / 8.074308 (12.383777) | 15.268356 / 10.191392 (5.076964) | 0.163359 / 0.680424 (-0.517065) | 0.020908 / 0.534201 (-0.513293) | 0.396870 / 0.579283 (-0.182413) | 0.432630 / 0.434364 (-0.001733) | 0.475909 / 0.540337 (-0.064429) | 0.681031 / 1.386936 (-0.705905) |\n\n</details>\n</details>\n\n\n",
"CI failures are unrelated",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005815 / 0.011353 (-0.005538) | 0.003419 / 0.011008 (-0.007589) | 0.080286 / 0.038508 (0.041778) | 0.056487 / 0.023109 (0.033377) | 0.304414 / 0.275898 (0.028516) | 0.341039 / 0.323480 (0.017559) | 0.004392 / 0.007986 (-0.003594) | 0.002852 / 0.004328 (-0.001477) | 0.062339 / 0.004250 (0.058089) | 0.044683 / 0.037052 (0.007630) | 0.311651 / 0.258489 (0.053162) | 0.357249 / 0.293841 (0.063409) | 0.027300 / 0.128546 (-0.101246) | 0.007963 / 0.075646 (-0.067683) | 0.261948 / 0.419271 (-0.157323) | 0.044952 / 0.043533 (0.001419) | 0.309990 / 0.255139 (0.054851) | 0.340735 / 0.283200 (0.057536) | 0.020786 / 0.141683 (-0.120897) | 1.471378 / 1.452155 (0.019224) | 1.517260 / 1.492716 (0.024543) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.245447 / 0.018006 (0.227441) | 0.418967 / 0.000490 (0.418477) | 0.007039 / 0.000200 (0.006840) | 0.000196 / 0.000054 (0.000142) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022880 / 0.037411 (-0.014532) | 0.071862 / 0.014526 (0.057337) | 0.083009 / 0.176557 (-0.093547) | 0.143414 / 0.737135 (-0.593722) | 0.082896 / 0.296338 (-0.213442) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.390645 / 0.215209 (0.175436) | 3.888104 / 2.077655 (1.810450) | 1.859572 / 1.504120 (0.355452) | 1.683803 / 1.541195 (0.142608) | 1.697902 / 1.468490 (0.229412) | 0.499537 / 4.584777 (-4.085239) | 3.015832 / 3.745712 (-0.729881) | 2.805696 / 5.269862 (-2.464166) | 1.830408 / 4.565676 (-2.735268) | 0.058191 / 0.424275 (-0.366085) | 0.006357 / 0.007607 (-0.001250) | 0.462486 / 0.226044 (0.236442) | 4.634951 / 2.268929 (2.366022) | 2.309364 / 55.444624 (-53.135260) | 1.979521 / 6.876477 (-4.896956) | 2.080011 / 2.142072 (-0.062062) | 0.593086 / 4.805227 (-4.212141) | 0.124856 / 6.500664 (-6.375808) | 0.060172 / 0.075469 (-0.015297) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251439 / 1.841788 (-0.590349) | 17.068999 / 8.074308 (8.994691) | 13.527209 / 10.191392 (3.335817) | 0.146636 / 0.680424 (-0.533788) | 0.016866 / 0.534201 (-0.517335) | 0.333202 / 0.579283 (-0.246081) | 0.360444 / 0.434364 (-0.073920) | 0.388378 / 0.540337 (-0.151959) | 0.530519 / 1.386936 (-0.856417) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006043 / 0.011353 (-0.005310) | 0.003612 / 0.011008 (-0.007396) | 0.062644 / 0.038508 (0.024135) | 0.056104 / 0.023109 (0.032995) | 0.446328 / 0.275898 (0.170430) | 0.478044 / 0.323480 (0.154564) | 0.004641 / 0.007986 (-0.003345) | 0.002896 / 0.004328 (-0.001432) | 0.062344 / 0.004250 (0.058093) | 0.046339 / 0.037052 (0.009287) | 0.454866 / 0.258489 (0.196377) | 0.484242 / 0.293841 (0.190401) | 0.028602 / 0.128546 (-0.099944) | 0.008075 / 0.075646 (-0.067571) | 0.067980 / 0.419271 (-0.351291) | 0.041339 / 0.043533 (-0.002194) | 0.452911 / 0.255139 (0.197772) | 0.474180 / 0.283200 (0.190981) | 0.019395 / 0.141683 (-0.122288) | 1.432161 / 1.452155 (-0.019993) | 1.505800 / 1.492716 (0.013083) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216983 / 0.018006 (0.198977) | 0.406232 / 0.000490 (0.405743) | 0.005101 / 0.000200 (0.004902) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026295 / 0.037411 (-0.011116) | 0.080490 / 0.014526 (0.065964) | 0.088105 / 0.176557 (-0.088451) | 0.143294 / 0.737135 (-0.593841) | 0.089125 / 0.296338 (-0.207213) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.465512 / 0.215209 (0.250302) | 4.648656 / 2.077655 (2.571002) | 2.598225 / 1.504120 (1.094105) | 2.409588 / 1.541195 (0.868393) | 2.513745 / 1.468490 (1.045255) | 0.507425 / 4.584777 (-4.077352) | 3.130164 / 3.745712 (-0.615548) | 2.836817 / 5.269862 (-2.433045) | 1.836029 / 4.565676 (-2.729647) | 0.058829 / 0.424275 (-0.365446) | 0.006551 / 0.007607 (-0.001056) | 0.537892 / 0.226044 (0.311848) | 5.401079 / 2.268929 (3.132150) | 3.019817 / 55.444624 (-52.424807) | 2.695131 / 6.876477 (-4.181346) | 2.805321 / 2.142072 (0.663248) | 0.595681 / 4.805227 (-4.209546) | 0.124368 / 6.500664 (-6.376296) | 0.060712 / 0.075469 (-0.014757) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.361508 / 1.841788 (-0.480279) | 17.811373 / 8.074308 (9.737065) | 14.482705 / 10.191392 (4.291313) | 0.153193 / 0.680424 (-0.527231) | 0.018347 / 0.534201 (-0.515854) | 0.330900 / 0.579283 (-0.248383) | 0.374948 / 0.434364 (-0.059416) | 0.385615 / 0.540337 (-0.154722) | 0.568077 / 1.386936 (-0.818859) |\n\n</details>\n</details>\n\n\n"
] | 2023-10-12T14:58:07Z
| 2023-10-12T15:58:20Z
| 2023-10-12T15:49:54Z
|
COLLABORATOR
| null | null | null |
Removes the temporary pin introduced in #6264
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6301/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6301/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6301.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6301",
"merged_at": "2023-10-12T15:49:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6301.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6301"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6579
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6579/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6579/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6579/events
|
https://github.com/huggingface/datasets/issues/6579
| 2,075,407,473
|
I_kwDODunzps57tDRx
| 6,579
|
Unable to load `eli5` dataset with streaming
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/89672451?v=4",
"events_url": "https://api.github.com/users/haok1402/events{/privacy}",
"followers_url": "https://api.github.com/users/haok1402/followers",
"following_url": "https://api.github.com/users/haok1402/following{/other_user}",
"gists_url": "https://api.github.com/users/haok1402/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/haok1402",
"id": 89672451,
"login": "haok1402",
"node_id": "MDQ6VXNlcjg5NjcyNDUx",
"organizations_url": "https://api.github.com/users/haok1402/orgs",
"received_events_url": "https://api.github.com/users/haok1402/received_events",
"repos_url": "https://api.github.com/users/haok1402/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/haok1402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haok1402/subscriptions",
"type": "User",
"url": "https://api.github.com/users/haok1402",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi @haok1402, I have created an issue in the Discussion tab of the corresponding dataset: https://huggingface.co/datasets/eli5/discussions/7\r\nLet's continue the discussion there!"
] | 2024-01-10T23:44:20Z
| 2024-01-11T09:19:18Z
| 2024-01-11T09:19:17Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Unable to load `eli5` dataset with streaming.
### Steps to reproduce the bug
This fails with FileNotFoundError: https://files.pushshift.io/reddit/submissions
```
from datasets import load_dataset
load_dataset("eli5", streaming=True)
```
This works correctly.
```
from datasets import load_dataset
load_dataset("eli5")
```
### Expected behavior
- Loading `eli5` dataset should not raise an error under the streaming mode.
- Or at the very least, show a warning that streaming mode is not supported with `eli5` dataset.
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-6.2.0-39-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.19.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
- `fsspec` version: 2023.6.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6579/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6579/timeline
| null |
not_planned
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6831
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6831/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6831/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6831/events
|
https://github.com/huggingface/datasets/pull/6831
| 2,258,537,405
|
PR_kwDODunzps5tdTy_
| 6,831
|
Add docs about the CLI
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6831). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Concretely, the docs about convert_to_parquet are here: https://moon-ci-docs.huggingface.co/docs/datasets/pr_6831/en/cli#convert-to-parquet",
"There is an issue with the example snippet when copy/pasting it: the leading shell dollar sign is also copied. I guess they will not like to fix it in the backend: currently they only support Python code snippets (with leading `>>>` or `...`), as they appear in the IPython interactive console.\r\n\r\nWhat do you suggest, @severo?"
] | 2024-04-23T10:41:03Z
| 2024-04-26T16:51:09Z
| 2024-04-25T10:44:10Z
|
MEMBER
| null | null | null |
Add docs about the CLI.
Close #6830.
CC: @severo
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6831/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6831/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6831.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6831",
"merged_at": "2024-04-25T10:44:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6831.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6831"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6733
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6733/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6733/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6733/events
|
https://github.com/huggingface/datasets/issues/6733
| 2,186,811,724
|
I_kwDODunzps6CWBlM
| 6,733
|
EmptyDatasetError when loading dataset downloaded with HuggingFace cli
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/77196999?v=4",
"events_url": "https://api.github.com/users/StwayneXG/events{/privacy}",
"followers_url": "https://api.github.com/users/StwayneXG/followers",
"following_url": "https://api.github.com/users/StwayneXG/following{/other_user}",
"gists_url": "https://api.github.com/users/StwayneXG/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/StwayneXG",
"id": 77196999,
"login": "StwayneXG",
"node_id": "MDQ6VXNlcjc3MTk2OTk5",
"organizations_url": "https://api.github.com/users/StwayneXG/orgs",
"received_events_url": "https://api.github.com/users/StwayneXG/received_events",
"repos_url": "https://api.github.com/users/StwayneXG/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/StwayneXG/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StwayneXG/subscriptions",
"type": "User",
"url": "https://api.github.com/users/StwayneXG",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi! `datasets` is not compatible with `huggingface_hub`'s cache structure, hence the error.\r\n\r\nYou can track https://github.com/huggingface/datasets/issues/5080 to get notified when this is implemented."
] | 2024-03-14T16:41:27Z
| 2024-03-15T18:09:02Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I am using a cluster that does not have access to the internet when given a job. I tried downloading the dataset using the huggingface-cli command and then loading it with load_dataset but I get an error:
```raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None```
The dataset I'm using is "lmsys/chatbot_arena_conversations". The folder structure is
- README.md
- data
- train-00000-of-00001-cced8514c7ed782a.parquet
### Steps to reproduce the bug
1. Download dataset using HuggingFace CLI: ```huggingface-cli download lmsys/chatbot_arena_conversations --local-dir ./lmsys/chatbot_arena_conversations```
2. In Python
```
from datasets import load_dataset
load_dataset("lmsys/chatbot_arena_conversations")
```
### Expected behavior
Should return a Dataset Dict in the form of
```
DatasetDict({
train: Dataset({
features: [...],
num_rows: 33,000
})
})
```
### Environment info
Python 3.11.5
Datasets 2.18.0
Transformers 4.38.2
Pytorch 2.2.0
Pyarrow 15.0.1
Rocky Linux release 8.9 (Green Obsidian)
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6733/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6733/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6110
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6110/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6110/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6110/events
|
https://github.com/huggingface/datasets/issues/6110
| 1,831,110,633
|
I_kwDODunzps5tJIfp
| 6,110
|
[BUG] Dataset initialized from in-memory data does not create cache.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/57797966?v=4",
"events_url": "https://api.github.com/users/MattYoon/events{/privacy}",
"followers_url": "https://api.github.com/users/MattYoon/followers",
"following_url": "https://api.github.com/users/MattYoon/following{/other_user}",
"gists_url": "https://api.github.com/users/MattYoon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MattYoon",
"id": 57797966,
"login": "MattYoon",
"node_id": "MDQ6VXNlcjU3Nzk3OTY2",
"organizations_url": "https://api.github.com/users/MattYoon/orgs",
"received_events_url": "https://api.github.com/users/MattYoon/received_events",
"repos_url": "https://api.github.com/users/MattYoon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MattYoon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MattYoon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MattYoon",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"This is expected behavior. You must provide `cache_file_name` when performing `.map` on an in-memory dataset for the result to be cached."
] | 2023-08-01T11:58:58Z
| 2023-08-17T14:03:01Z
| 2023-08-17T14:03:00Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
`Dataset` initialized from in-memory data (dictionary in my case, haven't tested with other types) does not create cache when processed with the `map` method, unlike `Dataset` initialized by other methods such as `load_dataset`.
### Steps to reproduce the bug
```python
# below code was run the second time so the map function can be loaded from cache if exists
from datasets import load_dataset, Dataset
dataset = load_dataset("tatsu-lab/alpaca")['train']
dataset = dataset.map(lambda x: {'input': x['input'] + 'hi'}) # some random map
print(len(dataset.cache_files))
# 1
# copy the exact same data but initialize from a dictionary
memory_dataset = Dataset.from_dict({
'instruction': dataset['instruction'],
'input': dataset['input'],
'output': dataset['output'],
'text': dataset['text']})
memory_dataset = memory_dataset.map(lambda x: {'input': x['input'] + 'hi'}) # exact same map
print(len(memory_dataset.cache_files))
# Map: 100%|██████████| 52002[/52002]
# 0
```
### Expected behavior
The `map` function should create cache regardless of the method the `Dataset` was created.
### Environment info
- `datasets` version: 2.14.2
- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6110/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6110/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5731
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5731/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5731/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5731/events
|
https://github.com/huggingface/datasets/pull/5731
| 1,662,012,913
|
PR_kwDODunzps5N_7Un
| 5,731
|
Temporarily pin fsspec
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009735 / 0.011353 (-0.001618) | 0.010410 / 0.011008 (-0.000598) | 0.134986 / 0.038508 (0.096478) | 0.038392 / 0.023109 (0.015283) | 0.414451 / 0.275898 (0.138553) | 0.447775 / 0.323480 (0.124295) | 0.007223 / 0.007986 (-0.000763) | 0.006373 / 0.004328 (0.002045) | 0.102631 / 0.004250 (0.098381) | 0.048516 / 0.037052 (0.011464) | 0.410179 / 0.258489 (0.151690) | 0.467773 / 0.293841 (0.173932) | 0.053163 / 0.128546 (-0.075384) | 0.019801 / 0.075646 (-0.055845) | 0.452708 / 0.419271 (0.033436) | 0.068691 / 0.043533 (0.025159) | 0.405482 / 0.255139 (0.150343) | 0.457669 / 0.283200 (0.174470) | 0.113464 / 0.141683 (-0.028219) | 1.918143 / 1.452155 (0.465988) | 2.033123 / 1.492716 (0.540407) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.274564 / 0.018006 (0.256557) | 0.608855 / 0.000490 (0.608366) | 0.006266 / 0.000200 (0.006066) | 0.000105 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033704 / 0.037411 (-0.003708) | 0.130982 / 0.014526 (0.116456) | 0.143862 / 0.176557 (-0.032694) | 0.212622 / 0.737135 (-0.524513) | 0.148899 / 0.296338 (-0.147439) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.670968 / 0.215209 (0.455759) | 6.602911 / 2.077655 (4.525256) | 2.644290 / 1.504120 (1.140171) | 2.268593 / 1.541195 (0.727399) | 2.325393 / 1.468490 (0.856903) | 1.388156 / 4.584777 (-3.196621) | 5.958569 / 3.745712 (2.212857) | 3.310756 / 5.269862 (-1.959106) | 2.390953 / 4.565676 (-2.174724) | 0.147416 / 0.424275 (-0.276859) | 0.015201 / 0.007607 (0.007594) | 0.794109 / 0.226044 (0.568064) | 7.984855 / 2.268929 (5.715926) | 3.382275 / 55.444624 (-52.062349) | 2.676102 / 6.876477 (-4.200375) | 2.846743 / 2.142072 (0.704671) | 1.467523 / 4.805227 (-3.337704) | 0.283184 / 6.500664 (-6.217480) | 0.088655 / 0.075469 (0.013186) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.632765 / 1.841788 (-0.209022) | 19.102473 / 8.074308 (11.028165) | 25.632535 / 10.191392 (15.441143) | 0.255628 / 0.680424 (-0.424795) | 0.034655 / 0.534201 (-0.499546) | 0.564593 / 0.579283 (-0.014690) | 0.668339 / 0.434364 (0.233975) | 0.648414 / 0.540337 (0.108076) | 0.766735 / 1.386936 (-0.620201) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009658 / 0.011353 (-0.001695) | 0.006690 / 0.011008 (-0.004318) | 0.099151 / 0.038508 (0.060643) | 0.037092 / 0.023109 (0.013983) | 0.470354 / 0.275898 (0.194456) | 0.525863 / 0.323480 (0.202383) | 0.007593 / 0.007986 (-0.000393) | 0.006637 / 0.004328 (0.002308) | 0.098782 / 0.004250 (0.094532) | 0.058524 / 0.037052 (0.021471) | 0.502569 / 0.258489 (0.244080) | 0.526410 / 0.293841 (0.232569) | 0.059486 / 0.128546 (-0.069060) | 0.019742 / 0.075646 (-0.055904) | 0.119715 / 0.419271 (-0.299556) | 0.065269 / 0.043533 (0.021736) | 0.483327 / 0.255139 (0.228188) | 0.506148 / 0.283200 (0.222948) | 0.123178 / 0.141683 (-0.018505) | 1.916624 / 1.452155 (0.464470) | 2.051410 / 1.492716 (0.558694) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.286481 / 0.018006 (0.268475) | 0.597300 / 0.000490 (0.596810) | 0.008906 / 0.000200 (0.008706) | 0.000128 / 0.000054 (0.000074) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031406 / 0.037411 (-0.006005) | 0.146748 / 0.014526 (0.132222) | 0.152898 / 0.176557 (-0.023658) | 0.212535 / 0.737135 (-0.524600) | 0.155577 / 0.296338 (-0.140761) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.660989 / 0.215209 (0.445780) | 6.688530 / 2.077655 (4.610875) | 3.039278 / 1.504120 (1.535159) | 2.660357 / 1.541195 (1.119162) | 2.696912 / 1.468490 (1.228422) | 1.259760 / 4.584777 (-3.325017) | 5.922452 / 3.745712 (2.176740) | 5.304200 / 5.269862 (0.034338) | 2.823928 / 4.565676 (-1.741748) | 0.148118 / 0.424275 (-0.276157) | 0.015575 / 0.007607 (0.007968) | 0.794404 / 0.226044 (0.568360) | 8.233651 / 2.268929 (5.964722) | 3.777482 / 55.444624 (-51.667142) | 3.064924 / 6.876477 (-3.811552) | 3.117803 / 2.142072 (0.975731) | 1.479559 / 4.805227 (-3.325668) | 0.254070 / 6.500664 (-6.246594) | 0.086806 / 0.075469 (0.011337) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.735515 / 1.841788 (-0.106273) | 18.934157 / 8.074308 (10.859848) | 22.645248 / 10.191392 (12.453856) | 0.227073 / 0.680424 (-0.453351) | 0.030650 / 0.534201 (-0.503551) | 0.594619 / 0.579283 (0.015336) | 0.653304 / 0.434364 (0.218940) | 0.707484 / 0.540337 (0.167147) | 0.823327 / 1.386936 (-0.563610) |\n\n</details>\n</details>\n\n\n"
] | 2023-04-11T08:33:15Z
| 2023-04-11T08:57:45Z
| 2023-04-11T08:47:55Z
|
MEMBER
| null | null | null |
Fix #5730.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5731/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5731/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5731.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5731",
"merged_at": "2023-04-11T08:47:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5731.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5731"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4825
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4825/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4825/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4825/events
|
https://github.com/huggingface/datasets/pull/4825
| 1,335,856,882
|
PR_kwDODunzps49BYWL
| 4,825
|
[Windows] Fix Access Denied when using os.rename()
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8703022?v=4",
"events_url": "https://api.github.com/users/DougTrajano/events{/privacy}",
"followers_url": "https://api.github.com/users/DougTrajano/followers",
"following_url": "https://api.github.com/users/DougTrajano/following{/other_user}",
"gists_url": "https://api.github.com/users/DougTrajano/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DougTrajano",
"id": 8703022,
"login": "DougTrajano",
"node_id": "MDQ6VXNlcjg3MDMwMjI=",
"organizations_url": "https://api.github.com/users/DougTrajano/orgs",
"received_events_url": "https://api.github.com/users/DougTrajano/received_events",
"repos_url": "https://api.github.com/users/DougTrajano/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DougTrajano/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DougTrajano/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DougTrajano",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Cool thank you ! Maybe we can just replace `os.rename` by `shutil.move` instead ?",
"> Cool thank you ! Maybe we can just replace `os.rename` by `shutil.move` instead ?\r\n\r\nYes, I think that could be a better solution, but I didn't test it in Linux (e.g. Ubuntu) to guarantee that `os.rename()` could be completely replaced by `shutil.move()`.",
"AFAIK `shutil.move` does call `os.rename` first before doing extra work to make it work on windows, so this is should be a safe safe change for linux ;)",
"> AFAIK `shutil.move` does call `os.rename` first before doing extra work to make it work on windows, so this is should be a safe safe change for linux ;)\r\n\r\nalright, let me change the PR then.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4825). All of your documentation changes will be reflected on that endpoint.",
"Hi @lhoestq looks like one of the tests failed, but is not related to this change, do I need to do something from my side?"
] | 2022-08-11T11:57:15Z
| 2022-08-24T13:09:07Z
| 2022-08-24T13:09:07Z
|
CONTRIBUTOR
| null | null | null |
In this PR, we are including an additional step when `os.rename()` raises a PermissionError.
Basically, we will use `shutil.move()` on the temp files.
Fix #2937
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4825/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4825/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4825.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4825",
"merged_at": "2022-08-24T13:09:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4825.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4825"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6859
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6859/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6859/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6859/events
|
https://github.com/huggingface/datasets/pull/6859
| 2,274,996,774
|
PR_kwDODunzps5uVIoZ
| 6,859
|
Support folder-based datasets with large metadata.jsonl
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/580564?v=4",
"events_url": "https://api.github.com/users/gbenson/events{/privacy}",
"followers_url": "https://api.github.com/users/gbenson/followers",
"following_url": "https://api.github.com/users/gbenson/following{/other_user}",
"gists_url": "https://api.github.com/users/gbenson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gbenson",
"id": 580564,
"login": "gbenson",
"node_id": "MDQ6VXNlcjU4MDU2NA==",
"organizations_url": "https://api.github.com/users/gbenson/orgs",
"received_events_url": "https://api.github.com/users/gbenson/received_events",
"repos_url": "https://api.github.com/users/gbenson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gbenson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbenson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gbenson",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2024-05-02T09:07:26Z
| 2024-05-02T09:07:26Z
| null |
NONE
| null | null | null |
I tried creating an `imagefolder` dataset with a 714MB `metadata.jsonl` but got the error below. This pull request fixes the problem by increasing the block size like the message suggests.
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("imagefolder", data_dir="data-for-upload")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/path/to/datasets/load.py", line 2609, in load_dataset
builder_instance.download_and_prepare(
...
File "/path/to/datasets/packaged_modules/folder_based_builder/folder_based_builder.py", line 245, in _read_metadata
return paj.read_json(f)
File "pyarrow/_json.pyx", line 308, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
```
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6859/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6859/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6859.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6859",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6859.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6859"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7526
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7526/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7526/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7526/events
|
https://github.com/huggingface/datasets/issues/7526
| 3,005,107,536
|
I_kwDODunzps6zHk1Q
| 7,526
|
[WIP] Faster downloads/uploads with Xet storage
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-04-18T14:46:42Z
| 2025-04-18T14:50:40Z
| null |
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|

Over the past few weeks, Hugging Face’s [Xet Team](https://huggingface.co/xet-team) took a major step forward by [migrating the first Model and Dataset repositories off LFS and to Xet storage](https://huggingface.co/posts/jsulz/911431940353906).
See more information on the HF blog: https://huggingface.co/blog/xet-on-the-hub
You can already enable Xet on Hugging Face account to benefit from faster downloads and uploads :)
We’re finalizing an official integration with the `huggingface_hub`library that will mean you get the benefits of Xet without any significant changes to your current workflow. In the meantime you might see this warning in `push_to_hub()`:
```
Uploading files as bytes or binary IO objects is not supported by Xet Storage.
```
This means the `huggingface_hub` + Xet integration isn't enabled for `datasets` yet. Stay tuned !
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 3,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7526/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7526/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5326
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5326/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5326/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5326/events
|
https://github.com/huggingface/datasets/issues/5326
| 1,471,634,168
|
I_kwDODunzps5Xt1r4
| 5,326
|
No documentation for main branch is built
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[] | 2022-12-01T16:50:58Z
| 2022-12-02T16:26:01Z
| 2022-12-02T16:26:01Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Since:
- #5250
- Commit: 703b84311f4ead83c7f79639f2dfa739295f0be6
the docs for main branch are no longer built.
The change introduced only triggers the docs building for releases.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5326/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5326/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7467
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7467/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7467/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7467/events
|
https://github.com/huggingface/datasets/issues/7467
| 2,930,067,107
|
I_kwDODunzps6upUaj
| 7,467
|
load_dataset with streaming hangs on parquet datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10550252?v=4",
"events_url": "https://api.github.com/users/The0nix/events{/privacy}",
"followers_url": "https://api.github.com/users/The0nix/followers",
"following_url": "https://api.github.com/users/The0nix/following{/other_user}",
"gists_url": "https://api.github.com/users/The0nix/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/The0nix",
"id": 10550252,
"login": "The0nix",
"node_id": "MDQ6VXNlcjEwNTUwMjUy",
"organizations_url": "https://api.github.com/users/The0nix/orgs",
"received_events_url": "https://api.github.com/users/The0nix/received_events",
"repos_url": "https://api.github.com/users/The0nix/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/The0nix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The0nix/subscriptions",
"type": "User",
"url": "https://api.github.com/users/The0nix",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi ! The issue comes from `pyarrow`, I reported it here: https://github.com/apache/arrow/issues/45214 (feel free to comment / thumb up).\n\nAlternatively we can try to find something else than `ParquetFileFragment.to_batches()` to iterate on Parquet data and keep the option the pass `filters=`..."
] | 2025-03-18T23:33:54Z
| 2025-03-25T10:28:04Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When I try to load a dataset with parquet files (e.g. "bigcode/the-stack") the dataset loads, but python interpreter can't exit and hangs
### Steps to reproduce the bug
```python3
import datasets
print('Start')
dataset = datasets.load_dataset("bigcode/the-stack", data_dir="data/yaml", streaming=True, split="train")
it = iter(dataset)
next(it)
print('Finish')
```
The program prints finish but doesn't exit and hangs indefinitely.
I tried this on two different machines and several datasets.
### Expected behavior
The program exits successfully
### Environment info
datasets==3.4.1
Python 3.12.9.
MacOS and Ubuntu Linux
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7467/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7467/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6538
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6538/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6538/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6538/events
|
https://github.com/huggingface/datasets/issues/6538
| 2,057,377,630
|
I_kwDODunzps56oRde
| 6,538
|
ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/131662185?v=4",
"events_url": "https://api.github.com/users/Sonali-Behera-TRT/events{/privacy}",
"followers_url": "https://api.github.com/users/Sonali-Behera-TRT/followers",
"following_url": "https://api.github.com/users/Sonali-Behera-TRT/following{/other_user}",
"gists_url": "https://api.github.com/users/Sonali-Behera-TRT/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Sonali-Behera-TRT",
"id": 131662185,
"login": "Sonali-Behera-TRT",
"node_id": "U_kgDOB9kBaQ",
"organizations_url": "https://api.github.com/users/Sonali-Behera-TRT/orgs",
"received_events_url": "https://api.github.com/users/Sonali-Behera-TRT/received_events",
"repos_url": "https://api.github.com/users/Sonali-Behera-TRT/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Sonali-Behera-TRT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sonali-Behera-TRT/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Sonali-Behera-TRT",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! Are you sure you have `datasets` 2.16 ? I just checked and on 2.16 I can run `from datasets.arrow_writer import SchemaInferenceError` without error",
"I have the same issue - using with datasets version 2.16.1. Also this is on a kaggle notebook - other people with the same issue also seem to be having it on kaggle?",
"I have the same issue now and didn't have this problem around 2 weeks ago.",
"> Hi ! Are you sure you have `datasets` 2.16 ? I just checked and on 2.16 I can run `from datasets.arrow_writer import SchemaInferenceError` without error\r\n\r\nYes, I am sure\r\n\r\n```\r\n!pip show datasets\r\nName: datasets\r\nVersion: 2.16.1\r\nSummary: HuggingFace community-driven open-source library of datasets\r\nHome-page: https://github.com/huggingface/datasets\r\nAuthor: HuggingFace Inc.\r\nAuthor-email: [email protected]\r\nLicense: Apache 2.0\r\nLocation: /opt/conda/lib/python3.10/site-packages\r\nRequires: aiohttp, dill, filelock, fsspec, huggingface-hub, multiprocess, numpy, packaging, pandas, pyarrow, pyarrow-hotfix, pyyaml, requests, tqdm, xxhash\r\nRequired-by: trl\r\n```",
"> I have the same issue - using with datasets version 2.16.1. Also this is on a kaggle notebook - other people with the same issue also seem to be having it on kaggle?\r\n\r\nDon't know about other people. But I am having this issue whose solution I can't find anywhere. And this issue still persists. ",
"> I have the same issue now and didn't have this problem around 2 weeks ago.\r\n\r\nSame here",
"I was having the same issue but the datasets version was 2.6.1, after I updated it to latest(2.16), error is gone while importing.\r\n",
"> I was having the same issue but the datasets version was 2.6.1, after I updated it to latest(2.16), error is gone while importing.\r\n\r\nI also have datasets version 2.16, but the error is still there.",
"Can you try re-installing `datasets` ?",
"> Can you try re-installing `datasets` ?\r\n\r\nI tried re-installing. Still getting the same error. \r\n",
"> > Can you try re-installing `datasets` ?\r\n> \r\n> I tried re-installing. Still getting the same error.\r\n\r\nIn kaggle I used:\r\n- `%pip install -U datasets`\r\nand then restarted runtime and then everything works fine.",
"> > > Can you try re-installing `datasets` ?\r\n> > \r\n> > \r\n> > I tried re-installing. Still getting the same error.\r\n> \r\n> In kaggle I used:\r\n> \r\n> * `%pip install -U datasets`\r\n> and then restarted runtime and then everything works fine.\r\n\r\nYes, this is working. When I restart the runtime after installing packages, it's working perfectly. Thank you so much. But why do we need to restart runtime every time after installing packages?",
"> > > > Can you try re-installing `datasets` ?\r\n> > > \r\n> > > \r\n> > > I tried re-installing. Still getting the same error.\r\n> > \r\n> > \r\n> > In kaggle I used:\r\n> > \r\n> > * `%pip install -U datasets`\r\n> > and then restarted runtime and then everything works fine.\r\n> \r\n> Yes, this is working. When I restart the runtime after installing packages, it's working perfectly. Thank you so much. But why do we need to restart runtime every time after installing packages?\r\nFor some packages it is required.\r\nhttps://stackoverflow.com/questions/57831187/need-to-restart-runtime-before-import-an-installed-package-in-colab\r\n",
"> > > > > Can you try re-installing `datasets` ?\r\n> > > > \r\n> > > > \r\n> > > > I tried re-installing. Still getting the same error.\r\n> > > \r\n> > > \r\n> > > In kaggle I used:\r\n> > > \r\n> > > * `%pip install -U datasets`\r\n> > > and then restarted runtime and then everything works fine.\r\n> > \r\n> > \r\n> > Yes, this is working. When I restart the runtime after installing packages, it's working perfectly. Thank you so much. But why do we need to restart runtime every time after installing packages?\r\n> > For some packages it is required.\r\n> > https://stackoverflow.com/questions/57831187/need-to-restart-runtime-before-import-an-installed-package-in-colab\r\n\r\nThank you for your assistance. I dedicated the past 2-3 weeks to resolving this issue. Interestingly, it runs flawlessly in Colab without requiring a runtime restart. However, the problem persisted exclusively in Kaggle. I appreciate your help once again. Thank you.",
"Closing this issue as it is not related to the datasets library; rather, it's linked to platform-related issues."
] | 2023-12-27T13:31:16Z
| 2024-01-03T10:06:47Z
| 2024-01-03T10:04:58Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
logging
)
from peft import LoraConfig, PeftModel
from trl import SFTTrainer
from huggingface_hub import login
import pandas as pd
```
Error:
````
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[5], line 14
4 from transformers import (
5 AutoModelForCausalLM,
6 AutoTokenizer,
(...)
11 logging
12 )
13 from peft import LoraConfig, PeftModel
---> 14 from trl import SFTTrainer
15 from huggingface_hub import login
16 import pandas as pd
File /opt/conda/lib/python3.10/site-packages/trl/__init__.py:21
8 from .import_utils import (
9 is_diffusers_available,
10 is_npu_available,
(...)
13 is_xpu_available,
14 )
15 from .models import (
16 AutoModelForCausalLMWithValueHead,
17 AutoModelForSeq2SeqLMWithValueHead,
18 PreTrainedModelWrapper,
19 create_reference_model,
20 )
---> 21 from .trainer import (
22 DataCollatorForCompletionOnlyLM,
23 DPOTrainer,
24 IterativeSFTTrainer,
25 PPOConfig,
26 PPOTrainer,
27 RewardConfig,
28 RewardTrainer,
29 SFTTrainer,
30 )
33 if is_diffusers_available():
34 from .models import (
35 DDPOPipelineOutput,
36 DDPOSchedulerOutput,
37 DDPOStableDiffusionPipeline,
38 DefaultDDPOStableDiffusionPipeline,
39 )
File /opt/conda/lib/python3.10/site-packages/trl/trainer/__init__.py:44
42 from .ppo_trainer import PPOTrainer
43 from .reward_trainer import RewardTrainer, compute_accuracy
---> 44 from .sft_trainer import SFTTrainer
45 from .training_configs import RewardConfig
File /opt/conda/lib/python3.10/site-packages/trl/trainer/sft_trainer.py:23
21 import torch.nn as nn
22 from datasets import Dataset
---> 23 from datasets.arrow_writer import SchemaInferenceError
24 from datasets.builder import DatasetGenerationError
25 from transformers import (
26 AutoModelForCausalLM,
27 AutoTokenizer,
(...)
33 TrainingArguments,
34 )
ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py
````
transformers version: 4.36.2
python version: 3.10.12
datasets version: 2.16.1
### Steps to reproduce the bug
1. Install packages
```
!pip install -U datasets trl accelerate peft bitsandbytes transformers trl huggingface_hub
```
2. import packages
```
import os
import torch
from datasets import load_dataset, Dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
logging
)
from peft import LoraConfig, PeftModel
from trl import SFTTrainer
from huggingface_hub import login
import pandas as pd
```
### Expected behavior
No error while importing
### Environment info
- `datasets` version: 2.16.0
- Platform: Linux-5.15.133+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.20.1
- PyArrow version: 11.0.0
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/131662185?v=4",
"events_url": "https://api.github.com/users/Sonali-Behera-TRT/events{/privacy}",
"followers_url": "https://api.github.com/users/Sonali-Behera-TRT/followers",
"following_url": "https://api.github.com/users/Sonali-Behera-TRT/following{/other_user}",
"gists_url": "https://api.github.com/users/Sonali-Behera-TRT/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Sonali-Behera-TRT",
"id": 131662185,
"login": "Sonali-Behera-TRT",
"node_id": "U_kgDOB9kBaQ",
"organizations_url": "https://api.github.com/users/Sonali-Behera-TRT/orgs",
"received_events_url": "https://api.github.com/users/Sonali-Behera-TRT/received_events",
"repos_url": "https://api.github.com/users/Sonali-Behera-TRT/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Sonali-Behera-TRT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sonali-Behera-TRT/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Sonali-Behera-TRT",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6538/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6538/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5244
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5244/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5244/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5244/events
|
https://github.com/huggingface/datasets/issues/5244
| 1,450,019,225
|
I_kwDODunzps5WbYmZ
| 5,244
|
Allow dataset streaming from private a private source when loading a dataset with a dataset loading script
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"events_url": "https://api.github.com/users/bruno-hays/events{/privacy}",
"followers_url": "https://api.github.com/users/bruno-hays/followers",
"following_url": "https://api.github.com/users/bruno-hays/following{/other_user}",
"gists_url": "https://api.github.com/users/bruno-hays/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bruno-hays",
"id": 48770768,
"login": "bruno-hays",
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"organizations_url": "https://api.github.com/users/bruno-hays/orgs",
"received_events_url": "https://api.github.com/users/bruno-hays/received_events",
"repos_url": "https://api.github.com/users/bruno-hays/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bruno-hays/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bruno-hays/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bruno-hays",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Hi ! What kind of private source ? We're exploring adding support for cloud storage and URIs like s3://, gs:// etc. with authentication in the download manager",
"Hello! It's a google cloud storage, so gs://, but I'm using it with https.\r\nBeing able to provide a file system like [here](https://huggingface.co/docs/datasets/main/filesystems#load-serialized-datasets) would be even more practical indeed.\r\nI've found a quite complicated workaround which consists of monkey patching all of the functions in streaming_download_manager.py to use my own _get_authentication_headers_for_url_ . \r\n\r\nA support for this use case would be greatly appreciated!\r\n\r\nFor reference my _get_authentication_headers_for_url_ looks like this:\r\n```\r\nimport os\r\nfrom typing import Optional, Union\r\n\r\nfrom datasets import config\r\nfrom huggingface_hub import HfFolder\r\nfrom gcsfs.credentials import GoogleCredentials\r\n\r\nDEFAULT_PROJECT = os.environ.get(\"GCSFS_DEFAULT_PROJECT\", \"\")\r\naccess = \"full_control\"\r\ngcs_token = os.environ.get(\"GCS_TOKEN\")\r\n\r\n\r\ndef get_authentication_headers_for_url(url: str, use_auth_token: Optional[Union[str, bool]] = None) -> dict:\r\n \"\"\"Handle the HF authentication\"\"\"\r\n headers = {}\r\n if url.startswith(config.HF_ENDPOINT):\r\n if use_auth_token is False:\r\n token = None\r\n elif isinstance(use_auth_token, str):\r\n token = use_auth_token\r\n else:\r\n token = HfFolder.get_token()\r\n elif url.startswith(\"https://storage.googleapis.com\"):\r\n credentials = GoogleCredentials(DEFAULT_PROJECT, access, gcs_token)\r\n credentials.maybe_refresh()\r\n token = credentials.credentials.token\r\n else:\r\n token = None\r\n if token:\r\n headers[\"authorization\"] = f\"Bearer {token}\"\r\n return headers\r\n```",
"I would be a big fan of this feature! @Hubert-Bonisseur if this doesn't become a supported feature, would you mind sharing your code? Thanks!",
"> I would be a big fan of this feature! @Hubert-Bonisseur if this doesn't become a supported feature, would you mind sharing your code? Thanks!\r\n\r\nI published it here:\r\nhttps://github.com/Hubert-Bonisseur/private-dataset-hub\r\n\r\nI modified the names of a lot of functions for privacy and I don't have time to test it again so you may get import errors, but you have the code. The custom_load_dataset is the function you are interested in I think.\r\n\r\nIt relies a lot on patching, if you find a better way to do this, I'd be interested.",
"Given the amount of patching it does, this is likely to break at one point. I'd encourage you to wait for a proper support in `datasets` directly if you can wait."
] | 2022-11-15T16:02:10Z
| 2022-11-23T14:02:30Z
| null |
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Add arguments to the function _get_authentication_headers_for_url_ like custom_endpoint and custom_token in order to add flexibility when downloading files from a private source.
It should also be possible to provide these arguments from the dataset loading script, maybe giving them to the dl_manager
### Motivation
It is possible to share a dataset hosted on another platform by writing a dataset loading script. It works perfectly for publicly available resources.
For resources that require authentication, you can provide a [download_custom](https://huggingface.co/docs/datasets/package_reference/builder_classes#datasets.DownloadManager) method to the download_manager.
Unfortunately, this function doesn't work with **dataset streaming**.
A solution so as to allow dataset streaming from private sources would be a more flexible _get_authentication_headers_for_url_ function.
### Your contribution
Would you be interested in this improvement ?
If so I could provide a PR. I've got something working locally, but it's not very clean, I'd need some guidance regarding integration.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5244/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5244/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/4547
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4547/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4547/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4547/events
|
https://github.com/huggingface/datasets/pull/4547
| 1,282,160,517
|
PR_kwDODunzps46Ot5u
| 4,547
|
[CI] Fix some warnings
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"There is a CI failure only related to the missing content of the universal_dependencies dataset card, we can ignore this failure in this PR",
"good catch, I thought I resolved them all sorry",
"Alright it should be good now"
] | 2022-06-23T10:10:49Z
| 2022-06-28T14:10:57Z
| 2022-06-28T13:59:54Z
|
MEMBER
| null | null | null |
There are some warnings in the CI that are annoying, I tried to remove most of them
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4547/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4547/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4547.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4547",
"merged_at": "2022-06-28T13:59:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4547.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4547"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6734
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6734/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6734/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6734/events
|
https://github.com/huggingface/datasets/issues/6734
| 2,187,646,694
|
I_kwDODunzps6CZNbm
| 6,734
|
Tokenization slows towards end of dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/98723285?v=4",
"events_url": "https://api.github.com/users/ethansmith2000/events{/privacy}",
"followers_url": "https://api.github.com/users/ethansmith2000/followers",
"following_url": "https://api.github.com/users/ethansmith2000/following{/other_user}",
"gists_url": "https://api.github.com/users/ethansmith2000/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ethansmith2000",
"id": 98723285,
"login": "ethansmith2000",
"node_id": "U_kgDOBeJl1Q",
"organizations_url": "https://api.github.com/users/ethansmith2000/orgs",
"received_events_url": "https://api.github.com/users/ethansmith2000/received_events",
"repos_url": "https://api.github.com/users/ethansmith2000/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ethansmith2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ethansmith2000/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ethansmith2000",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi ! First note that if the dataset is not heterogeneous / shuffled, there might be places in the data with shorter texts that are faster to tokenize.\r\n\r\nMoreover, the way `num_proc` works is by slicing the dataset and passing each slice to a process to run the `map()` function. So at the very end of `map()`, some processes might have finished transforming their slice of data while others are still running, causing the throughput to become lower.",
"I did see some comments about how num_proc=None could help and outputting numpy arrays can also help in the docs, but this seems quite odd now dropping down to 1it/s\r\n\r\n```bash\r\nRunning tokenizer on dataset (num_proc=48): 99%|█████████▉| 46048888/46390354 [12:33:30<4:20:32, 21.84 examples/s]\r\nRunning tokenizer on dataset (num_proc=48): 99%|█████████▉| 46049888/46390354 [12:36:11<8:37:59, 10.95 examples/s]\r\nRunning tokenizer on dataset (num_proc=48): 99%|█████████▉| 46050888/46390354 [12:46:35<24:56:56, 3.78 examples/s]\r\nRunning tokenizer on dataset (num_proc=48): 99%|█████████▉| 46051888/46390354 [12:56:43<35:08:10, 2.68 examples/s]\r\nRunning tokenizer on dataset (num_proc=48): 99%|█████████▉| 46052888/46390354 [13:06:58<42:05:41, 2.23 examples/s]\r\nRunning tokenizer on dataset (num_proc=48): 99%|█████████▉| 46053888/46390354 [13:16:01<44:40:18, 2.09 examples/s]\r\nRunning tokenizer on dataset (num_proc=48): 99%|█████████▉| 46054888/46390354 [13:25:11<46:35:28, 2.00 examples/s]\r\nRunning tokenizer on dataset (num_proc=48): 99%|█████████▉| 46055888/46390354 [13:34:23<47:55:34, 1.94 examples/s]\r\n```\r\n\r\n",
"@ethansmith2000 Hi, did you solve this problem? I'm strugging with the same problem now.",
"So, is there a way to solve this problem now?"
] | 2024-03-15T03:27:36Z
| 2025-02-20T17:40:54Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Mapped tokenization slows down substantially towards end of dataset.
train set started off very slow, caught up to 20k then tapered off til the end.
what's particularly strange is that the tokenization crashed a few times before due to errors with invalid tokens somewhere or corrupted downloads, and the speed ups/downs consistently happened the same times
```bash
Running tokenizer on dataset (num_proc=48): 0%| | 847000/881416735 [12:18<252:45:45, 967.72 examples/s]
Running tokenizer on dataset (num_proc=48): 0%| | 848000/881416735 [12:19<224:16:10, 1090.66 examples/s]
Running tokenizer on dataset (num_proc=48): 10%|▉ | 84964000/881416735 [3:48:00<11:21:34, 19476.01 examples/s]
Running tokenizer on dataset (num_proc=48): 10%|▉ | 84967000/881416735 [3:48:00<12:04:01, 18333.79 examples/s]
Running tokenizer on dataset (num_proc=48): 61%|██████ | 538631977/881416735 [13:46:40<27:50:04, 3420.84 examples/s]
Running tokenizer on dataset (num_proc=48): 61%|██████ | 538632977/881416735 [13:46:40<23:48:20, 3999.77 examples/s]
Running tokenizer on dataset (num_proc=48): 100%|█████████▉| 881365886/881416735 [38:30:19<04:34, 185.10 examples/s]
Running tokenizer on dataset (num_proc=48): 100%|█████████▉| 881366886/881416735 [38:30:25<04:36, 180.57 examples/s]
```
and validation set as well
```bash
Running tokenizer on dataset (num_proc=48): 90%|████████▉ | 41544000/46390354 [28:44<02:37, 30798.76 examples/s]
Running tokenizer on dataset (num_proc=48): 90%|████████▉ | 41550000/46390354 [28:44<02:08, 37698.08 examples/s]
Running tokenizer on dataset (num_proc=48): 96%|█████████▋| 44747422/46390354 [2:15:48<12:22:44, 36.87 examples/s]
Running tokenizer on dataset (num_proc=48): 96%|█████████▋| 44747422/46390354 [2:16:00<12:22:44, 36.87 examples/s]
```
### Steps to reproduce the bug
using the following kwargs
```python
with accelerator.main_process_first():
lm_datasets = tokenized_datasets.map(
group_texts,
batched=True,
num_proc=48
load_from_cache_file=True,
desc=f"Grouping texts in chunks of {block_size}",
)
```
running through slurm script
```bash
#SBATCH --partition=gpu-nvidia-a100
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gpus-per-task=8
#SBATCH --cpus-per-task=96
```
using this dataset https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T
### Expected behavior
Constant speed throughout
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-5.15.0-1049-aws-x86_64-with-glibc2.10
- Python version: 3.8.18
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.0.3
- `fsspec` version: 2023.10.0
| null |
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6734/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6734/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/4809
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4809/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4809/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4809/events
|
https://github.com/huggingface/datasets/pull/4809
| 1,332,842,747
|
PR_kwDODunzps483Y4h
| 4,809
|
Complete the mlqa dataset card
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7940237?v=4",
"events_url": "https://api.github.com/users/el2e10/events{/privacy}",
"followers_url": "https://api.github.com/users/el2e10/followers",
"following_url": "https://api.github.com/users/el2e10/following{/other_user}",
"gists_url": "https://api.github.com/users/el2e10/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/el2e10",
"id": 7940237,
"login": "el2e10",
"node_id": "MDQ6VXNlcjc5NDAyMzc=",
"organizations_url": "https://api.github.com/users/el2e10/orgs",
"received_events_url": "https://api.github.com/users/el2e10/received_events",
"repos_url": "https://api.github.com/users/el2e10/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/el2e10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/el2e10/subscriptions",
"type": "User",
"url": "https://api.github.com/users/el2e10",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for your contribution, @eldhoittangeorge.\r\n> \r\n> The CI error message: https://github.com/huggingface/datasets/runs/7743526624?check_suite_focus=true\r\n> \r\n> ```\r\n> E ValueError: The following issues have been found in the dataset cards:\r\n> E YAML tags:\r\n> E __init__() missing 5 required positional arguments: 'annotations_creators', 'language_creators', 'license', 'size_categories', and 'source_datasets'\r\n> ```\r\n\r\nI will fix the CI error.",
"@eldhoittangeorge, thanks again for all the fixes. Just a minor one before we can merge this PR: https://github.com/huggingface/datasets/runs/7744885754?check_suite_focus=true\r\n```\r\nE YAML tags:\r\nE Could not validate the metadata, found the following errors:\r\nE * field 'language_creators':\r\nE \t['unknown'] are not registered tags for 'language_creators', reference at https://github.com/huggingface/datasets/tree/main/src/datasets/utils/resources/creators.json\r\n```",
"> \r\n\r\nThanks, I updated the file. \r\nA small suggestion can you mention this link https://github.com/huggingface/datasets/tree/main/src/datasets/utils/resources/ in the contribution page. So that others will know the acceptable values for the tags."
] | 2022-08-09T07:38:06Z
| 2022-08-09T16:26:21Z
| 2022-08-09T13:26:43Z
|
CONTRIBUTOR
| null | null | null |
I fixed the issue #4808
Details of PR:
- Added languages included in the dataset.
- Added task id and task category.
- Updated the citation information.
Fix #4808.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4809/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4809/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4809.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4809",
"merged_at": "2022-08-09T13:26:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4809.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4809"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7077
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7077/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7077/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7077/events
|
https://github.com/huggingface/datasets/issues/7077
| 2,432,345,489
|
I_kwDODunzps6Q-qWR
| 7,077
|
column_names ignored by load_dataset() when loading CSV file
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9130265?v=4",
"events_url": "https://api.github.com/users/luismsgomes/events{/privacy}",
"followers_url": "https://api.github.com/users/luismsgomes/followers",
"following_url": "https://api.github.com/users/luismsgomes/following{/other_user}",
"gists_url": "https://api.github.com/users/luismsgomes/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/luismsgomes",
"id": 9130265,
"login": "luismsgomes",
"node_id": "MDQ6VXNlcjkxMzAyNjU=",
"organizations_url": "https://api.github.com/users/luismsgomes/orgs",
"received_events_url": "https://api.github.com/users/luismsgomes/received_events",
"repos_url": "https://api.github.com/users/luismsgomes/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/luismsgomes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luismsgomes/subscriptions",
"type": "User",
"url": "https://api.github.com/users/luismsgomes",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"I confirm that `column_names` values are not copied to `names` variable because in this case `CsvConfig.__post_init__` is not called: `CsvConfig` is instantiated with default values and afterwards the `config_kwargs` are used to overwrite its attributes.\r\n\r\n@luismsgomes in the meantime, you can avoid the bug if you pass `names` instead of `column_names`."
] | 2024-07-26T14:18:04Z
| 2024-07-30T07:52:26Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
load_dataset() ignores the column_names kwarg when loading a CSV file. Instead, it uses whatever values are on the first line of the file.
### Steps to reproduce the bug
Call `load_dataset` to load data from a CSV file and specify `column_names` kwarg.
### Expected behavior
The resulting dataset should have the specified column names **and** the first line of the file should be considered as data values.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.10.0-30-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- `huggingface_hub` version: 0.24.2
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7077/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7077/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/4857
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4857/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4857/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4857/events
|
https://github.com/huggingface/datasets/issues/4857
| 1,340,397,153
|
I_kwDODunzps5P5NZh
| 4,857
|
No preprocessed wikipedia is working on huggingface/datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/30733039?v=4",
"events_url": "https://api.github.com/users/aninrusimha/events{/privacy}",
"followers_url": "https://api.github.com/users/aninrusimha/followers",
"following_url": "https://api.github.com/users/aninrusimha/following{/other_user}",
"gists_url": "https://api.github.com/users/aninrusimha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aninrusimha",
"id": 30733039,
"login": "aninrusimha",
"node_id": "MDQ6VXNlcjMwNzMzMDM5",
"organizations_url": "https://api.github.com/users/aninrusimha/orgs",
"received_events_url": "https://api.github.com/users/aninrusimha/received_events",
"repos_url": "https://api.github.com/users/aninrusimha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aninrusimha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aninrusimha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aninrusimha",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting @aninrusimha.\r\n\r\nPlease, note that the preprocessed datasets are still available, as described in the dataset card, e.g.: https://huggingface.co/datasets/wikipedia\r\n```python\r\nds = load_dataset(\"wikipedia\", \"20220301.en\")\r\n``` ",
"This is working now, but I was getting an error a few days ago when running an existing script. Unfortunately I did not do a proper bug report, but for some reason I was unable to load the dataset due to a request being made to the wikimedia website. However, its working now. Thanks for the reply!"
] | 2022-08-16T13:55:33Z
| 2022-08-17T13:35:08Z
| 2022-08-17T13:35:08Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Describe the bug
20220301 wikipedia dump has been deprecated, so now there is no working wikipedia dump on huggingface
https://huggingface.co/datasets/wikipedia
https://dumps.wikimedia.org/enwiki/
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/30733039?v=4",
"events_url": "https://api.github.com/users/aninrusimha/events{/privacy}",
"followers_url": "https://api.github.com/users/aninrusimha/followers",
"following_url": "https://api.github.com/users/aninrusimha/following{/other_user}",
"gists_url": "https://api.github.com/users/aninrusimha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aninrusimha",
"id": 30733039,
"login": "aninrusimha",
"node_id": "MDQ6VXNlcjMwNzMzMDM5",
"organizations_url": "https://api.github.com/users/aninrusimha/orgs",
"received_events_url": "https://api.github.com/users/aninrusimha/received_events",
"repos_url": "https://api.github.com/users/aninrusimha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aninrusimha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aninrusimha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aninrusimha",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4857/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4857/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6418
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6418/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6418/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6418/events
|
https://github.com/huggingface/datasets/pull/6418
| 1,993,224,629
|
PR_kwDODunzps5fb7lu
| 6,418
|
Remove token value from warnings
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005135 / 0.011353 (-0.006218) | 0.002950 / 0.011008 (-0.008058) | 0.062316 / 0.038508 (0.023808) | 0.030068 / 0.023109 (0.006959) | 0.251998 / 0.275898 (-0.023900) | 0.274806 / 0.323480 (-0.048674) | 0.003067 / 0.007986 (-0.004919) | 0.003082 / 0.004328 (-0.001247) | 0.048503 / 0.004250 (0.044253) | 0.045167 / 0.037052 (0.008114) | 0.254277 / 0.258489 (-0.004212) | 0.290528 / 0.293841 (-0.003313) | 0.023666 / 0.128546 (-0.104880) | 0.007049 / 0.075646 (-0.068597) | 0.202367 / 0.419271 (-0.216905) | 0.056291 / 0.043533 (0.012758) | 0.251923 / 0.255139 (-0.003216) | 0.273595 / 0.283200 (-0.009605) | 0.019065 / 0.141683 (-0.122618) | 1.100832 / 1.452155 (-0.351322) | 1.266758 / 1.492716 (-0.225959) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094311 / 0.018006 (0.076305) | 0.303199 / 0.000490 (0.302709) | 0.000238 / 0.000200 (0.000039) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019413 / 0.037411 (-0.017999) | 0.062618 / 0.014526 (0.048092) | 0.072850 / 0.176557 (-0.103707) | 0.119124 / 0.737135 (-0.618012) | 0.074044 / 0.296338 (-0.222294) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.273660 / 0.215209 (0.058451) | 2.682371 / 2.077655 (0.604716) | 1.426041 / 1.504120 (-0.078079) | 1.317186 / 1.541195 (-0.224009) | 1.332385 / 1.468490 (-0.136106) | 0.394599 / 4.584777 (-4.190178) | 2.368167 / 3.745712 (-1.377545) | 2.683728 / 5.269862 (-2.586134) | 1.668348 / 4.565676 (-2.897329) | 0.046177 / 0.424275 (-0.378098) | 0.004833 / 0.007607 (-0.002774) | 0.331413 / 0.226044 (0.105369) | 3.278984 / 2.268929 (1.010055) | 1.797600 / 55.444624 (-53.647024) | 1.492202 / 6.876477 (-5.384274) | 1.536039 / 2.142072 (-0.606034) | 0.470601 / 4.805227 (-4.334626) | 0.100833 / 6.500664 (-6.399831) | 0.042787 / 0.075469 (-0.032682) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.959036 / 1.841788 (-0.882752) | 11.632956 / 8.074308 (3.558648) | 10.384574 / 10.191392 (0.193182) | 0.127477 / 0.680424 (-0.552946) | 0.014072 / 0.534201 (-0.520129) | 0.269534 / 0.579283 (-0.309749) | 0.259753 / 0.434364 (-0.174611) | 0.313450 / 0.540337 (-0.226888) | 0.431799 / 1.386936 (-0.955137) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004964 / 0.011353 (-0.006389) | 0.002906 / 0.011008 (-0.008102) | 0.048145 / 0.038508 (0.009637) | 0.056457 / 0.023109 (0.033348) | 0.274131 / 0.275898 (-0.001767) | 0.298534 / 0.323480 (-0.024946) | 0.004145 / 0.007986 (-0.003841) | 0.002415 / 0.004328 (-0.001913) | 0.048558 / 0.004250 (0.044308) | 0.039031 / 0.037052 (0.001978) | 0.278948 / 0.258489 (0.020459) | 0.312358 / 0.293841 (0.018517) | 0.024902 / 0.128546 (-0.103645) | 0.007286 / 0.075646 (-0.068360) | 0.053839 / 0.419271 (-0.365433) | 0.032510 / 0.043533 (-0.011023) | 0.272023 / 0.255139 (0.016884) | 0.293420 / 0.283200 (0.010221) | 0.018932 / 0.141683 (-0.122750) | 1.122792 / 1.452155 (-0.329362) | 1.167385 / 1.492716 (-0.325331) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094574 / 0.018006 (0.076567) | 0.303810 / 0.000490 (0.303321) | 0.000227 / 0.000200 (0.000027) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021675 / 0.037411 (-0.015737) | 0.070289 / 0.014526 (0.055763) | 0.080345 / 0.176557 (-0.096211) | 0.120220 / 0.737135 (-0.616915) | 0.084080 / 0.296338 (-0.212259) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300134 / 0.215209 (0.084925) | 2.945831 / 2.077655 (0.868176) | 1.605303 / 1.504120 (0.101183) | 1.480135 / 1.541195 (-0.061059) | 1.526039 / 1.468490 (0.057549) | 0.398264 / 4.584777 (-4.186512) | 2.461391 / 3.745712 (-1.284321) | 2.559929 / 5.269862 (-2.709933) | 1.541391 / 4.565676 (-3.024286) | 0.045319 / 0.424275 (-0.378957) | 0.004834 / 0.007607 (-0.002773) | 0.352186 / 0.226044 (0.126141) | 3.500108 / 2.268929 (1.231180) | 1.966394 / 55.444624 (-53.478230) | 1.675500 / 6.876477 (-5.200977) | 1.683134 / 2.142072 (-0.458938) | 0.465085 / 4.805227 (-4.340142) | 0.097235 / 6.500664 (-6.403429) | 0.040764 / 0.075469 (-0.034705) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982813 / 1.841788 (-0.858975) | 12.382529 / 8.074308 (4.308221) | 11.082660 / 10.191392 (0.891268) | 0.129113 / 0.680424 (-0.551310) | 0.015718 / 0.534201 (-0.518483) | 0.272776 / 0.579283 (-0.306507) | 0.275513 / 0.434364 (-0.158850) | 0.304933 / 0.540337 (-0.235404) | 0.414591 / 1.386936 (-0.972345) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004400 / 0.011353 (-0.006953) | 0.002580 / 0.011008 (-0.008428) | 0.060975 / 0.038508 (0.022467) | 0.029337 / 0.023109 (0.006228) | 0.248643 / 0.275898 (-0.027255) | 0.274476 / 0.323480 (-0.049004) | 0.003925 / 0.007986 (-0.004061) | 0.002332 / 0.004328 (-0.001997) | 0.049501 / 0.004250 (0.045251) | 0.042730 / 0.037052 (0.005678) | 0.255823 / 0.258489 (-0.002666) | 0.281748 / 0.293841 (-0.012093) | 0.023118 / 0.128546 (-0.105428) | 0.006957 / 0.075646 (-0.068690) | 0.201630 / 0.419271 (-0.217641) | 0.054258 / 0.043533 (0.010725) | 0.252289 / 0.255139 (-0.002850) | 0.267561 / 0.283200 (-0.015639) | 0.016903 / 0.141683 (-0.124780) | 1.104322 / 1.452155 (-0.347833) | 1.160027 / 1.492716 (-0.332689) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096340 / 0.018006 (0.078333) | 0.305187 / 0.000490 (0.304697) | 0.000222 / 0.000200 (0.000022) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018733 / 0.037411 (-0.018678) | 0.062382 / 0.014526 (0.047856) | 0.072309 / 0.176557 (-0.104248) | 0.119772 / 0.737135 (-0.617364) | 0.074655 / 0.296338 (-0.221683) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286150 / 0.215209 (0.070941) | 2.770328 / 2.077655 (0.692673) | 1.494593 / 1.504120 (-0.009527) | 1.358611 / 1.541195 (-0.182583) | 1.396308 / 1.468490 (-0.072182) | 0.394806 / 4.584777 (-4.189971) | 2.349100 / 3.745712 (-1.396613) | 2.600541 / 5.269862 (-2.669321) | 1.568975 / 4.565676 (-2.996701) | 0.046212 / 0.424275 (-0.378063) | 0.004821 / 0.007607 (-0.002786) | 0.332286 / 0.226044 (0.106242) | 3.302643 / 2.268929 (1.033714) | 1.838992 / 55.444624 (-53.605633) | 1.571919 / 6.876477 (-5.304557) | 1.574956 / 2.142072 (-0.567117) | 0.464156 / 4.805227 (-4.341071) | 0.097983 / 6.500664 (-6.402681) | 0.042243 / 0.075469 (-0.033226) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.941675 / 1.841788 (-0.900113) | 11.450326 / 8.074308 (3.376017) | 10.169943 / 10.191392 (-0.021449) | 0.137879 / 0.680424 (-0.542545) | 0.013765 / 0.534201 (-0.520436) | 0.268633 / 0.579283 (-0.310650) | 0.265083 / 0.434364 (-0.169281) | 0.302099 / 0.540337 (-0.238238) | 0.423033 / 1.386936 (-0.963903) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004998 / 0.011353 (-0.006355) | 0.003174 / 0.011008 (-0.007834) | 0.047924 / 0.038508 (0.009416) | 0.057598 / 0.023109 (0.034489) | 0.278823 / 0.275898 (0.002925) | 0.334349 / 0.323480 (0.010869) | 0.004053 / 0.007986 (-0.003932) | 0.002554 / 0.004328 (-0.001774) | 0.047797 / 0.004250 (0.043547) | 0.039802 / 0.037052 (0.002749) | 0.278295 / 0.258489 (0.019806) | 0.319597 / 0.293841 (0.025757) | 0.024802 / 0.128546 (-0.103744) | 0.007362 / 0.075646 (-0.068284) | 0.066983 / 0.419271 (-0.352288) | 0.032707 / 0.043533 (-0.010826) | 0.277350 / 0.255139 (0.022211) | 0.296829 / 0.283200 (0.013629) | 0.017902 / 0.141683 (-0.123781) | 1.129765 / 1.452155 (-0.322390) | 1.201940 / 1.492716 (-0.290777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095631 / 0.018006 (0.077625) | 0.296999 / 0.000490 (0.296510) | 0.000234 / 0.000200 (0.000034) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021547 / 0.037411 (-0.015865) | 0.070003 / 0.014526 (0.055477) | 0.083173 / 0.176557 (-0.093384) | 0.121676 / 0.737135 (-0.615459) | 0.082974 / 0.296338 (-0.213364) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298982 / 0.215209 (0.083773) | 2.918666 / 2.077655 (0.841011) | 1.582054 / 1.504120 (0.077934) | 1.463804 / 1.541195 (-0.077391) | 1.484384 / 1.468490 (0.015893) | 0.399443 / 4.584777 (-4.185334) | 2.393515 / 3.745712 (-1.352197) | 2.533004 / 5.269862 (-2.736858) | 1.490411 / 4.565676 (-3.075266) | 0.045274 / 0.424275 (-0.379002) | 0.004783 / 0.007607 (-0.002824) | 0.350510 / 0.226044 (0.124465) | 3.437927 / 2.268929 (1.168998) | 1.940115 / 55.444624 (-53.504509) | 1.662025 / 6.876477 (-5.214452) | 1.640621 / 2.142072 (-0.501452) | 0.464014 / 4.805227 (-4.341214) | 0.095506 / 6.500664 (-6.405158) | 0.040172 / 0.075469 (-0.035297) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975618 / 1.841788 (-0.866169) | 12.561067 / 8.074308 (4.486759) | 11.408037 / 10.191392 (1.216645) | 0.130699 / 0.680424 (-0.549725) | 0.016796 / 0.534201 (-0.517405) | 0.271130 / 0.579283 (-0.308153) | 0.283506 / 0.434364 (-0.150857) | 0.304482 / 0.540337 (-0.235856) | 0.413673 / 1.386936 (-0.973263) |\n\n</details>\n</details>\n\n\n"
] | 2023-11-14T17:34:06Z
| 2023-11-14T22:26:04Z
| 2023-11-14T22:19:45Z
|
COLLABORATOR
| null | null | null |
Fix #6412
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6418/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6418/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6418.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6418",
"merged_at": "2023-11-14T22:19:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6418.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6418"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6624
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6624/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6624/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6624/events
|
https://github.com/huggingface/datasets/issues/6624
| 2,103,950,718
|
I_kwDODunzps59Z71-
| 6,624
|
How to download the laion-coco dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15981416?v=4",
"events_url": "https://api.github.com/users/vanpersie32/events{/privacy}",
"followers_url": "https://api.github.com/users/vanpersie32/followers",
"following_url": "https://api.github.com/users/vanpersie32/following{/other_user}",
"gists_url": "https://api.github.com/users/vanpersie32/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vanpersie32",
"id": 15981416,
"login": "vanpersie32",
"node_id": "MDQ6VXNlcjE1OTgxNDE2",
"organizations_url": "https://api.github.com/users/vanpersie32/orgs",
"received_events_url": "https://api.github.com/users/vanpersie32/received_events",
"repos_url": "https://api.github.com/users/vanpersie32/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vanpersie32/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vanpersie32/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vanpersie32",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi, this dataset has been disabled by the authors, so unfortunately it's no longer possible to download it."
] | 2024-01-28T03:56:05Z
| 2024-02-06T09:43:31Z
| 2024-02-06T09:43:31Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
The laion coco dataset is not available now. How to download it
https://huggingface.co/datasets/laion/laion-coco
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6624/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6624/timeline
| null |
not_planned
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4739
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4739/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4739/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4739/events
|
https://github.com/huggingface/datasets/pull/4739
| 1,316,400,915
|
PR_kwDODunzps48BHdE
| 4,739
|
Deprecate metrics
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I mark this as Draft because the deprecated version number needs being updated after the latest release.",
"Perhaps now is the time to also update the `inspect_metric` from `evaluate` with the changes introduced in https://github.com/huggingface/datasets/pull/4433 (cc @lvwerra) ",
"What do you think of including what changes users have to do to switch to `evaluate` in the warning message ?\r\n(basically replace `datasets.load_metric` by `evaluate.load`)\r\n\r\nI think it can help users migrate to `evaluate` and silence the warnings"
] | 2022-07-25T07:35:55Z
| 2022-07-28T11:44:27Z
| 2022-07-28T11:32:16Z
|
MEMBER
| null | null | null |
Deprecate metrics:
- deprecate public functions: `load_metric`, `list_metrics` and `inspect_metric`: docstring and warning
- test deprecation warnings are issues
- deprecate metrics in all docs
- remove mentions to metrics in docs and README
- deprecate internal functions/classes
Maybe we should also stop testing metrics?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4739/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4739/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4739.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4739",
"merged_at": "2022-07-28T11:32:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4739.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4739"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7458
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7458/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7458/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7458/events
|
https://github.com/huggingface/datasets/issues/7458
| 2,925,403,528
|
I_kwDODunzps6uXh2I
| 7,458
|
Loading the `laion/filtered-wit` dataset in streaming mode fails on v3.4.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23343961?v=4",
"events_url": "https://api.github.com/users/nikita-savelyevv/events{/privacy}",
"followers_url": "https://api.github.com/users/nikita-savelyevv/followers",
"following_url": "https://api.github.com/users/nikita-savelyevv/following{/other_user}",
"gists_url": "https://api.github.com/users/nikita-savelyevv/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nikita-savelyevv",
"id": 23343961,
"login": "nikita-savelyevv",
"node_id": "MDQ6VXNlcjIzMzQzOTYx",
"organizations_url": "https://api.github.com/users/nikita-savelyevv/orgs",
"received_events_url": "https://api.github.com/users/nikita-savelyevv/received_events",
"repos_url": "https://api.github.com/users/nikita-savelyevv/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nikita-savelyevv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikita-savelyevv/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nikita-savelyevv",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] | null |
[
"thanks for reporting, I released 3.4.1 with a fix"
] | 2025-03-17T14:54:02Z
| 2025-03-17T16:02:04Z
| 2025-03-17T15:25:55Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Loading https://huggingface.co/datasets/laion/filtered-wit in streaming mode fails after update to `datasets==3.4.0`. The dataset loads fine on v3.3.2.
### Steps to reproduce the bug
Steps to reproduce:
```
pip install datastes==3.4.0
python -c "from datasets import load_dataset; load_dataset('laion/filtered-wit', split='train', streaming=True)"
```
Results in:
```
$ python -c "from datasets import load_dataset; load_dataset('laion/filtered-wit', split='train', streaming=True)"
Repo card metadata block was not found. Setting CardData to empty.
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 560/560 [00:00<00:00, 2280.24it/s]
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/load.py", line 2080, in load_dataset
return builder_instance.as_streaming_dataset(split=split)
File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/builder.py", line 1265, in as_streaming_dataset
splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 49, in _split_generators
data_files = dl_manager.download_and_extract(self.config.data_files)
File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 169, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 121, in extract
urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True)
File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 496, in map_nested
mapped = [
File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 497, in <listcomp>
map_nested(
File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 513, in map_nested
mapped = [
File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 514, in <listcomp>
_single_map_nested((function, obj, batched, batch_size, types, None, True, None))
File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 375, in _single_map_nested
return function(data_struct)
File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 131, in _extract
raise NotImplementedError(
NotImplementedError: Extraction protocol for TAR archives like 'hf://datasets/laion/filtered-wit@c38ca7464e9934d9a49f88b3f60f5ad63b245465/data/00000.tar' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.
Example usage:
url = dl_manager.download(url)
tar_archive_iterator = dl_manager.iter_archive(url)
for filename, file in tar_archive_iterator:
...
```
### Expected behavior
Dataset loads successfully.
### Environment info
Ubuntu 20.04.6. Python 3.9. Datasets 3.4.0.
pip freeze:
```
aiohappyeyeballs==2.6.1
aiohttp==3.11.14
aiosignal==1.3.2
async-timeout==5.0.1
attrs==25.3.0
certifi==2025.1.31
charset-normalizer==3.4.1
datasets==3.4.0
dill==0.3.8
filelock==3.18.0
frozenlist==1.5.0
fsspec==2024.12.0
huggingface-hub==0.29.3
idna==3.10
multidict==6.1.0
multiprocess==0.70.16
numpy==2.0.2
packaging==24.2
pandas==2.2.3
propcache==0.3.0
pyarrow==19.0.1
python-dateutil==2.9.0.post0
pytz==2025.1
PyYAML==6.0.2
requests==2.32.3
six==1.17.0
tqdm==4.67.1
typing_extensions==4.12.2
tzdata==2025.1
urllib3==2.3.0
xxhash==3.5.0
yarl==1.18.3
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7458/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7458/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5860
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5860/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5860/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5860/events
|
https://github.com/huggingface/datasets/pull/5860
| 1,709,727,460
|
PR_kwDODunzps5QfojD
| 5,860
|
Minor tqdm optim
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006917 / 0.011353 (-0.004436) | 0.004803 / 0.011008 (-0.006205) | 0.097082 / 0.038508 (0.058574) | 0.035105 / 0.023109 (0.011996) | 0.325911 / 0.275898 (0.050013) | 0.371858 / 0.323480 (0.048378) | 0.006451 / 0.007986 (-0.001534) | 0.004421 / 0.004328 (0.000093) | 0.075738 / 0.004250 (0.071487) | 0.053624 / 0.037052 (0.016572) | 0.332661 / 0.258489 (0.074172) | 0.372729 / 0.293841 (0.078888) | 0.028279 / 0.128546 (-0.100267) | 0.009318 / 0.075646 (-0.066328) | 0.328505 / 0.419271 (-0.090766) | 0.066962 / 0.043533 (0.023429) | 0.316863 / 0.255139 (0.061724) | 0.344296 / 0.283200 (0.061096) | 0.120575 / 0.141683 (-0.021108) | 1.457867 / 1.452155 (0.005712) | 1.597361 / 1.492716 (0.104644) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296399 / 0.018006 (0.278392) | 0.507196 / 0.000490 (0.506706) | 0.003036 / 0.000200 (0.002836) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028535 / 0.037411 (-0.008876) | 0.110566 / 0.014526 (0.096040) | 0.122078 / 0.176557 (-0.054479) | 0.182926 / 0.737135 (-0.554210) | 0.125546 / 0.296338 (-0.170792) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426952 / 0.215209 (0.211742) | 4.255608 / 2.077655 (2.177953) | 2.063865 / 1.504120 (0.559745) | 1.867198 / 1.541195 (0.326004) | 2.058236 / 1.468490 (0.589746) | 0.525885 / 4.584777 (-4.058892) | 3.723607 / 3.745712 (-0.022105) | 1.919144 / 5.269862 (-3.350718) | 1.235308 / 4.565676 (-3.330368) | 0.066423 / 0.424275 (-0.357852) | 0.012045 / 0.007607 (0.004438) | 0.528432 / 0.226044 (0.302388) | 5.268723 / 2.268929 (2.999794) | 2.504071 / 55.444624 (-52.940553) | 2.137999 / 6.876477 (-4.738477) | 2.229987 / 2.142072 (0.087914) | 0.641739 / 4.805227 (-4.163488) | 0.142635 / 6.500664 (-6.358029) | 0.065649 / 0.075469 (-0.009820) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.182710 / 1.841788 (-0.659078) | 15.339777 / 8.074308 (7.265469) | 14.722308 / 10.191392 (4.530916) | 0.145914 / 0.680424 (-0.534510) | 0.017861 / 0.534201 (-0.516340) | 0.393092 / 0.579283 (-0.186191) | 0.431179 / 0.434364 (-0.003185) | 0.485712 / 0.540337 (-0.054625) | 0.602634 / 1.386936 (-0.784302) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006792 / 0.011353 (-0.004561) | 0.005118 / 0.011008 (-0.005890) | 0.073440 / 0.038508 (0.034932) | 0.033751 / 0.023109 (0.010642) | 0.389243 / 0.275898 (0.113345) | 0.397083 / 0.323480 (0.073603) | 0.005989 / 0.007986 (-0.001997) | 0.004289 / 0.004328 (-0.000040) | 0.073228 / 0.004250 (0.068977) | 0.053490 / 0.037052 (0.016438) | 0.396070 / 0.258489 (0.137581) | 0.415134 / 0.293841 (0.121293) | 0.028649 / 0.128546 (-0.099897) | 0.009159 / 0.075646 (-0.066487) | 0.080813 / 0.419271 (-0.338458) | 0.048200 / 0.043533 (0.004667) | 0.388009 / 0.255139 (0.132870) | 0.382174 / 0.283200 (0.098975) | 0.107807 / 0.141683 (-0.033876) | 1.467276 / 1.452155 (0.015121) | 1.568091 / 1.492716 (0.075375) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.328030 / 0.018006 (0.310024) | 0.498058 / 0.000490 (0.497568) | 0.002513 / 0.000200 (0.002313) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029835 / 0.037411 (-0.007576) | 0.113859 / 0.014526 (0.099333) | 0.130813 / 0.176557 (-0.045743) | 0.183646 / 0.737135 (-0.553490) | 0.136561 / 0.296338 (-0.159777) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438901 / 0.215209 (0.223692) | 4.376426 / 2.077655 (2.298771) | 2.220932 / 1.504120 (0.716812) | 2.043585 / 1.541195 (0.502390) | 2.161383 / 1.468490 (0.692893) | 0.523224 / 4.584777 (-4.061553) | 3.730589 / 3.745712 (-0.015123) | 1.859602 / 5.269862 (-3.410260) | 1.073415 / 4.565676 (-3.492261) | 0.066363 / 0.424275 (-0.357912) | 0.012491 / 0.007607 (0.004884) | 0.542052 / 0.226044 (0.316008) | 5.426246 / 2.268929 (3.157318) | 2.673884 / 55.444624 (-52.770740) | 2.372611 / 6.876477 (-4.503865) | 2.482216 / 2.142072 (0.340143) | 0.705669 / 4.805227 (-4.099558) | 0.141075 / 6.500664 (-6.359589) | 0.065339 / 0.075469 (-0.010130) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.316403 / 1.841788 (-0.525385) | 15.832870 / 8.074308 (7.758562) | 13.307045 / 10.191392 (3.115653) | 0.147258 / 0.680424 (-0.533166) | 0.017966 / 0.534201 (-0.516235) | 0.414396 / 0.579283 (-0.164887) | 0.431801 / 0.434364 (-0.002563) | 0.465483 / 0.540337 (-0.074855) | 0.577850 / 1.386936 (-0.809086) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006368 / 0.011353 (-0.004985) | 0.004274 / 0.011008 (-0.006734) | 0.098799 / 0.038508 (0.060291) | 0.029096 / 0.023109 (0.005986) | 0.308009 / 0.275898 (0.032111) | 0.345701 / 0.323480 (0.022221) | 0.005312 / 0.007986 (-0.002674) | 0.003435 / 0.004328 (-0.000894) | 0.075912 / 0.004250 (0.071662) | 0.041993 / 0.037052 (0.004941) | 0.320075 / 0.258489 (0.061586) | 0.347506 / 0.293841 (0.053665) | 0.025456 / 0.128546 (-0.103091) | 0.008461 / 0.075646 (-0.067185) | 0.322823 / 0.419271 (-0.096448) | 0.044650 / 0.043533 (0.001117) | 0.314118 / 0.255139 (0.058979) | 0.333436 / 0.283200 (0.050237) | 0.093811 / 0.141683 (-0.047871) | 1.464464 / 1.452155 (0.012310) | 1.548098 / 1.492716 (0.055382) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.015905 / 0.018006 (-0.002101) | 0.427847 / 0.000490 (0.427357) | 0.007600 / 0.000200 (0.007400) | 0.000421 / 0.000054 (0.000366) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024530 / 0.037411 (-0.012882) | 0.099907 / 0.014526 (0.085381) | 0.107282 / 0.176557 (-0.069275) | 0.168332 / 0.737135 (-0.568804) | 0.109875 / 0.296338 (-0.186464) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.451064 / 0.215209 (0.235855) | 4.491434 / 2.077655 (2.413779) | 2.253251 / 1.504120 (0.749131) | 2.086740 / 1.541195 (0.545545) | 2.133288 / 1.468490 (0.664798) | 0.558801 / 4.584777 (-4.025976) | 3.463525 / 3.745712 (-0.282187) | 1.747657 / 5.269862 (-3.522205) | 1.005465 / 4.565676 (-3.560211) | 0.068341 / 0.424275 (-0.355934) | 0.012521 / 0.007607 (0.004914) | 0.567002 / 0.226044 (0.340957) | 5.689529 / 2.268929 (3.420601) | 2.700562 / 55.444624 (-52.744062) | 2.384888 / 6.876477 (-4.491589) | 2.503160 / 2.142072 (0.361088) | 0.667107 / 4.805227 (-4.138120) | 0.137253 / 6.500664 (-6.363412) | 0.068300 / 0.075469 (-0.007170) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.202916 / 1.841788 (-0.638872) | 14.163393 / 8.074308 (6.089085) | 14.402463 / 10.191392 (4.211071) | 0.145273 / 0.680424 (-0.535151) | 0.016996 / 0.534201 (-0.517205) | 0.363520 / 0.579283 (-0.215763) | 0.421595 / 0.434364 (-0.012769) | 0.438413 / 0.540337 (-0.101925) | 0.508615 / 1.386936 (-0.878321) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006419 / 0.011353 (-0.004934) | 0.004346 / 0.011008 (-0.006662) | 0.076356 / 0.038508 (0.037848) | 0.029370 / 0.023109 (0.006260) | 0.371046 / 0.275898 (0.095148) | 0.398279 / 0.323480 (0.074799) | 0.005258 / 0.007986 (-0.002728) | 0.003528 / 0.004328 (-0.000800) | 0.076787 / 0.004250 (0.072537) | 0.041575 / 0.037052 (0.004522) | 0.362319 / 0.258489 (0.103830) | 0.402134 / 0.293841 (0.108293) | 0.025633 / 0.128546 (-0.102913) | 0.008826 / 0.075646 (-0.066820) | 0.082380 / 0.419271 (-0.336892) | 0.041655 / 0.043533 (-0.001878) | 0.357583 / 0.255139 (0.102444) | 0.383486 / 0.283200 (0.100287) | 0.093682 / 0.141683 (-0.048001) | 1.488522 / 1.452155 (0.036367) | 1.576090 / 1.492716 (0.083373) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185556 / 0.018006 (0.167550) | 0.431345 / 0.000490 (0.430855) | 0.002290 / 0.000200 (0.002090) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026030 / 0.037411 (-0.011382) | 0.102889 / 0.014526 (0.088364) | 0.109541 / 0.176557 (-0.067015) | 0.161050 / 0.737135 (-0.576085) | 0.113525 / 0.296338 (-0.182814) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445301 / 0.215209 (0.230092) | 4.437320 / 2.077655 (2.359666) | 2.174181 / 1.504120 (0.670061) | 1.977440 / 1.541195 (0.436245) | 2.036323 / 1.468490 (0.567832) | 0.554227 / 4.584777 (-4.030550) | 3.462746 / 3.745712 (-0.282966) | 1.765257 / 5.269862 (-3.504604) | 1.014515 / 4.565676 (-3.551161) | 0.068391 / 0.424275 (-0.355884) | 0.013154 / 0.007607 (0.005546) | 0.546696 / 0.226044 (0.320652) | 5.490628 / 2.268929 (3.221699) | 2.611947 / 55.444624 (-52.832677) | 2.282659 / 6.876477 (-4.593818) | 2.333972 / 2.142072 (0.191899) | 0.663140 / 4.805227 (-4.142087) | 0.137996 / 6.500664 (-6.362668) | 0.069063 / 0.075469 (-0.006407) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.332147 / 1.841788 (-0.509641) | 14.781592 / 8.074308 (6.707284) | 13.399190 / 10.191392 (3.207798) | 0.139370 / 0.680424 (-0.541054) | 0.016742 / 0.534201 (-0.517459) | 0.364138 / 0.579283 (-0.215146) | 0.402479 / 0.434364 (-0.031885) | 0.427591 / 0.540337 (-0.112746) | 0.520864 / 1.386936 (-0.866072) |\n\n</details>\n</details>\n\n\n"
] | 2023-05-15T09:49:37Z
| 2023-05-17T18:46:46Z
| 2023-05-17T18:39:35Z
|
MEMBER
| null | null | null |
Don't create a tqdm progress bar when `disable_tqdm` is passed to `map_nested`.
On my side it sped up some iterable datasets by ~30% when `map_nested` is used extensively to recursively tensorize python dicts.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5860/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5860/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5860.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5860",
"merged_at": "2023-05-17T18:39:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5860.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5860"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4671
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4671/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4671/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4671/events
|
https://github.com/huggingface/datasets/issues/4671
| 1,300,385,909
|
I_kwDODunzps5NglB1
| 4,671
|
Dataset Viewer issue for wmt16
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun",
"user_view_type": "public"
}
|
[
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"Thanks for reporting, @lewtun.\r\n\r\n~We can't load the dataset locally, so I think this is an issue with the loading script (not the viewer).~\r\n\r\n We are investigating...",
"Recently, there was a merged PR related to this dataset:\r\n- #4554\r\n\r\nWe are looking at this...",
"Indeed, the above mentioned PR fixed the loading script (it was not working before).\r\n\r\nI'm forcing the refresh of the Viewer.",
"Please note that the above mentioned PR also made an enhancement in the `datasets` library, required by this loading script. This enhancement will only be available to the Viewer once we make our next release.",
"OK, it's working now.\r\n\r\nhttps://huggingface.co/datasets/wmt16/viewer/ro-en/test\r\n\r\n<img width=\"1434\" alt=\"Capture d’écran 2022-09-08 à 10 15 55\" src=\"https://user-images.githubusercontent.com/1676121/189071665-17d2d149-9b22-42bf-93ac-1a966c3f637a.png\">\r\n",
"Thank you @severo !!"
] | 2022-07-11T08:34:11Z
| 2022-09-13T13:27:02Z
| 2022-09-08T08:16:06Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Link
https://huggingface.co/datasets/wmt16
### Description
[Reported](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/12#62cb83f14c7f35284e796f9c) by a user of AutoTrain Evaluate. AFAIK this dataset was working 1-2 weeks ago, and I'm not sure how to interpret this error.
```
Status code: 400
Exception: NotImplementedError
Message: This is a abstract method
```
Thanks!
### Owner
No
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4671/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4671/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5923
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5923/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5923/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5923/events
|
https://github.com/huggingface/datasets/issues/5923
| 1,737,436,227
|
I_kwDODunzps5njyxD
| 5,923
|
Cannot import datasets - ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/71412682?v=4",
"events_url": "https://api.github.com/users/ehuangc/events{/privacy}",
"followers_url": "https://api.github.com/users/ehuangc/followers",
"following_url": "https://api.github.com/users/ehuangc/following{/other_user}",
"gists_url": "https://api.github.com/users/ehuangc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ehuangc",
"id": 71412682,
"login": "ehuangc",
"node_id": "MDQ6VXNlcjcxNDEyNjgy",
"organizations_url": "https://api.github.com/users/ehuangc/orgs",
"received_events_url": "https://api.github.com/users/ehuangc/received_events",
"repos_url": "https://api.github.com/users/ehuangc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ehuangc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ehuangc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ehuangc",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Based on https://github.com/rapidsai/cudf/issues/10187, this probably means your `pyarrow` installation is not compatible with `datasets`.\r\n\r\nCan you please execute the following commands in the terminal and paste the output here?\r\n```\r\nconda list | grep arrow\r\n``` \r\n```\r\npython -c \"import pyarrow; print(pyarrow.__file__)\"\r\n```\r\n\r\n\r\n",
"> Based on [rapidsai/cudf#10187](https://github.com/rapidsai/cudf/issues/10187), this probably means your `pyarrow` installation is not compatible with `datasets`.\r\n> \r\n> Can you please execute the following commands in the terminal and paste the output here?\r\n> \r\n> ```\r\n> conda list | grep arrow\r\n> ```\r\n> \r\n> ```\r\n> python -c \"import pyarrow; print(pyarrow.__file__)\"\r\n> ```\r\n\r\n\r\nHere is the output to the first command:\r\n```\r\narrow-cpp 11.0.0 py39h7f74497_0 \r\npyarrow 12.0.0 pypi_0 pypi\r\n```\r\nand the second:\r\n```\r\n/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/__init__.py\r\n```\r\nThanks!\r\n\r\n\r\n\r\n",
"after installing pytesseract 0.3.10, I got the above error. FYI ",
"RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):\r\npyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject",
"I got the same error, pyarrow 12.0.0 released May/2023 (https://pypi.org/project/pyarrow/) is not compatible, running `pip install pyarrow==11.0.0` to force install the previous version solved the problem.\r\n\r\nDo we need to update dependencies? ",
"Please note that our CI properly passes all tests with `pyarrow-12.0.0`, for Python 3.7 and Python 3.10, for Ubuntu and Windows: see for example https://github.com/huggingface/datasets/actions/runs/5157324334/jobs/9289582291",
"For conda with python3.8.16 this solved my problem! thanks!\r\n\r\n> I got the same error, pyarrow 12.0.0 released May/2023 (https://pypi.org/project/pyarrow/) is not compatible, running `pip install pyarrow==11.0.0` to force install the previous version solved the problem.\r\n> \r\n> Do we need to update dependencies? I can work on that if no one else is working on it.\r\n\r\n",
"Thanks for replying. I am not sure about those environments but it seems like pyarrow-12.0.0 does not work for conda with python 3.8.16. \r\n\r\n> Please note that our CI properly passes all tests with `pyarrow-12.0.0`, for Python 3.7 and Python 3.10, for Ubuntu and Windows: see for example https://github.com/huggingface/datasets/actions/runs/5157324334/jobs/9289582291\r\n\r\n",
"Got the same error with:\r\n\r\n```\r\narrow-cpp 11.0.0 py310h7516544_0 \r\npyarrow 12.0.0 pypi_0 pypi\r\n\r\npython 3.10.11 h7a1cb2a_2 \r\n\r\ndatasets 2.13.0 pyhd8ed1ab_0 conda-forge\r\n```",
"> I got the same error, pyarrow 12.0.0 released May/2023 (https://pypi.org/project/pyarrow/) is not compatible, running `pip install pyarrow==11.0.0` to force install the previous version solved the problem.\r\n> \r\n> Do we need to update dependencies?\r\n\r\nThis solved the issue for me as well.",
"> I got the same error, pyarrow 12.0.0 released May/2023 (https://pypi.org/project/pyarrow/) is not compatible, running `pip install pyarrow==11.0.0` to force install the previous version solved the problem.\r\n> \r\n> Do we need to update dependencies?\r\n\r\nSolved it for me also",
"> 基于 [rapidsai/cudf#10187](https://github.com/rapidsai/cudf/issues/10187),这可能意味着您的安装与 不兼容。`pyarrow``datasets`\r\n> \r\n> 您能否在终端中执行以下命令并将输出粘贴到此处?\r\n> \r\n> ```\r\n> conda list | grep arrow\r\n> ```\r\n> \r\n> ```\r\n> python -c \"import pyarrow; print(pyarrow.__file__)\"\r\n> ```\r\n\r\narrow-cpp 11.0.0 py310h7516544_0 \r\npyarrow 12.0.1 pypi_0 pypi\r\n\r\n/root/miniconda3/lib/python3.10/site-packages/pyarrow/__init__.py",
"Got the same problem with\r\n\r\narrow-cpp 11.0.0 py310h1fc3239_0 \r\npyarrow 12.0.1 pypi_0 pypi\r\n\r\nminiforge3/envs/mlp/lib/python3.10/site-packages/pyarrow/__init__.py\r\n\r\nReverting back to pyarrow 11 solved the problem.\r\n",
"Solved with `pip install pyarrow==11.0.0`",
"I got different. Solved with\r\npip install pyarrow==12.0.1\r\npip install cchardet\r\n\r\nenv:\r\nPython 3.9.16\r\ntransformers 4.32.1",
"> I got the same error, pyarrow 12.0.0 released May/2023 (https://pypi.org/project/pyarrow/) is not compatible, running `pip install pyarrow==11.0.0` to force install the previous version solved the problem.\r\n> \r\n> Do we need to update dependencies?\r\n\r\nThis works for me as well",
"> I got different. Solved with pip install pyarrow==12.0.1 pip install cchardet\r\n> \r\n> env: Python 3.9.16 transformers 4.32.1\r\n\r\nI guess it also depends on the Python version. I got Python 3.11.5 and pyarrow==12.0.0. \r\nIt works! ",
"Hi, if this helps anyone, pip install pyarrow==11.0.0 did not work for me (I'm using Colab) but this worked: \r\n!pip install --extra-index-url=https://pypi.nvidia.com cudf-cu11",
"> Hi, if this helps anyone, pip install pyarrow==11.0.0 did not work for me (I'm using Colab) but this worked: !pip install --extra-index-url=https://pypi.nvidia.com cudf-cu11\r\n\r\nthanks! I met the same problem and your suggestion solved it.",
"(I was doing quiet install so I didn't notice it initially)\r\nI've been loading the same dataset for months on Colab, just now I got this error as well. I think Colab has changed their image recently (I had some errors regarding CUDA previously as well). beware of this and restart runtime if you're doing quite pip installs.\r\nmoreover installing stable version of datasets on pypi gives this:\r\n\r\n```\r\nERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\r\nibis-framework 7.1.0 requires pyarrow<15,>=2, but you have pyarrow 15.0.0 which is incompatible.\r\nSuccessfully installed datasets-2.17.0 dill-0.3.8 multiprocess-0.70.16 pyarrow-15.0.0\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly installed versions.\r\n``` \r\n",
"for colab - pip install pyarrow==11.0.0",
"The above methods didn't help me. So I installed an older version: `!pip install datasets==2.16.1`\r\nand `import datasets` worked!!",
"@rasith1998 @PennlaineChu You can avoid this issue by restarting the session after the `datasets` installation (see https://github.com/huggingface/datasets/issues/6661 for more info)\r\n\r\nAlso, we've contacted Google Colab folks to update the default PyArrow installation, so the issue should soon be \"officially\" resolved on their side.",
"> Also, we've contacted Google Colab folks to update the default PyArrow installation, so the issue should soon be \"officially\" resolved on their side.\r\n\r\nThis has been done! Google Colab now pre-installs PyArrow 14.0.2, which makes this issue unlikely to happen, so I'm closing it.",
"I am facing this issue outside of Colab, in a normal Python (3.10.14) environment:\r\n```\r\npyarrow==11.0.0\r\ndatasets=2.20.0\r\ntransformers==4.41.2\r\n```\r\n\r\nWhat can I do to solve it?\r\n\r\nI am somewhat bound to `pyarrow==11.0.0`. Is there a version of `datasets` that supports this?"
] | 2023-06-02T04:16:32Z
| 2024-06-27T10:07:49Z
| 2024-02-25T16:38:03Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When trying to import datasets, I get a pyarrow ValueError:
Traceback (most recent call last):
File "/Users/edward/test/test.py", line 1, in <module>
import datasets
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/__init__.py", line 43, in <module>
from .arrow_dataset import Dataset
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 65, in <module>
from .arrow_reader import ArrowReader
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/arrow_reader.py", line 28, in <module>
import pyarrow.parquet as pq
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/parquet/__init__.py", line 20, in <module>
from .core import *
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 45, in <module>
from pyarrow.fs import (LocalFileSystem, FileSystem, FileType,
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/fs.py", line 49, in <module>
from pyarrow._gcsfs import GcsFileSystem # noqa
File "pyarrow/_gcsfs.pyx", line 1, in init pyarrow._gcsfs
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
### Steps to reproduce the bug
`import datasets`
### Expected behavior
Successful import
### Environment info
Conda environment, MacOS
python 3.9.12
datasets 2.12.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 6,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5923/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5923/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5269
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5269/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5269/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5269/events
|
https://github.com/huggingface/datasets/issues/5269
| 1,456,485,799
|
I_kwDODunzps5W0DWn
| 5,269
|
Shell completions
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/32936898?v=4",
"events_url": "https://api.github.com/users/Freed-Wu/events{/privacy}",
"followers_url": "https://api.github.com/users/Freed-Wu/followers",
"following_url": "https://api.github.com/users/Freed-Wu/following{/other_user}",
"gists_url": "https://api.github.com/users/Freed-Wu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Freed-Wu",
"id": 32936898,
"login": "Freed-Wu",
"node_id": "MDQ6VXNlcjMyOTM2ODk4",
"organizations_url": "https://api.github.com/users/Freed-Wu/orgs",
"received_events_url": "https://api.github.com/users/Freed-Wu/received_events",
"repos_url": "https://api.github.com/users/Freed-Wu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Freed-Wu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Freed-Wu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Freed-Wu",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"I don't think we need completion on the datasets-cli, since we're mainly developing huggingface-cli",
"I see."
] | 2022-11-19T13:48:59Z
| 2022-11-21T15:06:15Z
| 2022-11-21T15:06:14Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Like <https://github.com/huggingface/huggingface_hub/issues/1197>, datasets-cli maybe need it, too.
### Motivation
See above.
### Your contribution
Maybe.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/32936898?v=4",
"events_url": "https://api.github.com/users/Freed-Wu/events{/privacy}",
"followers_url": "https://api.github.com/users/Freed-Wu/followers",
"following_url": "https://api.github.com/users/Freed-Wu/following{/other_user}",
"gists_url": "https://api.github.com/users/Freed-Wu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Freed-Wu",
"id": 32936898,
"login": "Freed-Wu",
"node_id": "MDQ6VXNlcjMyOTM2ODk4",
"organizations_url": "https://api.github.com/users/Freed-Wu/orgs",
"received_events_url": "https://api.github.com/users/Freed-Wu/received_events",
"repos_url": "https://api.github.com/users/Freed-Wu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Freed-Wu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Freed-Wu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Freed-Wu",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5269/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5269/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7148
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7148/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7148/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7148/events
|
https://github.com/huggingface/datasets/issues/7148
| 2,523,833,413
|
I_kwDODunzps6WbqRF
| 7,148
|
Bug: Error when downloading mteb/mtop_domain
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/77958037?v=4",
"events_url": "https://api.github.com/users/ZiyiXia/events{/privacy}",
"followers_url": "https://api.github.com/users/ZiyiXia/followers",
"following_url": "https://api.github.com/users/ZiyiXia/following{/other_user}",
"gists_url": "https://api.github.com/users/ZiyiXia/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ZiyiXia",
"id": 77958037,
"login": "ZiyiXia",
"node_id": "MDQ6VXNlcjc3OTU4MDM3",
"organizations_url": "https://api.github.com/users/ZiyiXia/orgs",
"received_events_url": "https://api.github.com/users/ZiyiXia/received_events",
"repos_url": "https://api.github.com/users/ZiyiXia/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ZiyiXia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZiyiXia/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ZiyiXia",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Could you please try with `force_redownload` instead?\r\nEDIT:\r\n```python\r\ndata = load_dataset(\"mteb/mtop_domain\", \"en\", download_mode=\"force_redownload\")\r\n```",
"Seems the error is still there",
"I am not able to reproduce the issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: data = load_dataset(\"mteb/mtop_domain\", \"en\")\r\n\r\nIn [3]: data\r\nOut[3]: DatasetDict({\r\n train: Dataset({\r\n features: ['id', 'text', 'label', 'label_text'],\r\n num_rows: 15667\r\n })\r\n validation: Dataset({\r\n features: ['id', 'text', 'label', 'label_text'],\r\n num_rows: 2235\r\n })\r\n test: Dataset({\r\n features: ['id', 'text', 'label', 'label_text'],\r\n num_rows: 4386\r\n })\r\n})\r\n```",
"Just solved this by reinstall Huggingface Hub and datasets. Thanks for your help!"
] | 2024-09-13T04:09:39Z
| 2024-09-14T15:11:35Z
| 2024-09-14T15:11:35Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When downloading the dataset "mteb/mtop_domain", ran into the following error:
```
Traceback (most recent call last):
File "/share/project/xzy/test/test_download.py", line 3, in <module>
data = load_dataset("mteb/mtop_domain", "en", trust_remote_code=True)
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2606, in load_dataset
builder_instance = load_dataset_builder(
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2277, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1923, in dataset_module_factory
raise e1 from None
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1896, in dataset_module_factory
).get_module()
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1507, in get_module
local_path = self.download_loading_script()
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1467, in download_loading_script
return cached_path(file_path, download_config=download_config)
File "/opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 211, in cached_path
output_path = get_from_cache(
File "/opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 689, in get_from_cache
fsspec_get(
File "/opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 395, in fsspec_get
fs.get_file(path, temp_file.name, callback=callback)
File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 648, in get_file
http_get(
File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 578, in http_get
raise EnvironmentError(
OSError: Consistency check failed: file should be of size 2191 but has size 2190 ((…)ets/mteb/mtop_domain@main/mtop_domain.py).
We are sorry for the inconvenience. Please retry with `force_download=True`.
If the issue persists, please let us know by opening an issue on https://github.com/huggingface/huggingface_hub.
```
Try to download through HF datasets directly but got the same error as above.
```python
from datasets import load_dataset
data = load_dataset("mteb/mtop_domain", "en")
```
### Steps to reproduce the bug
```python
from datasets import load_dataset
data = load_dataset("mteb/mtop_domain", "en", force_download=True)
```
With and without `force_download=True` both ran into the same error.
### Expected behavior
Should download the dataset successfully.
### Environment info
- datasets version: 2.21.0
- huggingface-hub version: 0.24.6
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7148/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7148/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6189
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6189/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6189/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6189/events
|
https://github.com/huggingface/datasets/pull/6189
| 1,871,569,855
|
PR_kwDODunzps5ZB8Z9
| 6,189
|
Don't alter input in Features.from_dict
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006166 / 0.011353 (-0.005187) | 0.003643 / 0.011008 (-0.007365) | 0.080966 / 0.038508 (0.042458) | 0.060538 / 0.023109 (0.037429) | 0.309205 / 0.275898 (0.033307) | 0.351007 / 0.323480 (0.027527) | 0.003592 / 0.007986 (-0.004393) | 0.002880 / 0.004328 (-0.001448) | 0.062957 / 0.004250 (0.058707) | 0.049015 / 0.037052 (0.011963) | 0.309436 / 0.258489 (0.050947) | 0.362695 / 0.293841 (0.068854) | 0.027818 / 0.128546 (-0.100728) | 0.008030 / 0.075646 (-0.067616) | 0.262678 / 0.419271 (-0.156594) | 0.046024 / 0.043533 (0.002491) | 0.316246 / 0.255139 (0.061107) | 0.337454 / 0.283200 (0.054254) | 0.022529 / 0.141683 (-0.119154) | 1.432492 / 1.452155 (-0.019662) | 1.499646 / 1.492716 (0.006929) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.190931 / 0.018006 (0.172925) | 0.428053 / 0.000490 (0.427564) | 0.002839 / 0.000200 (0.002639) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024042 / 0.037411 (-0.013370) | 0.073952 / 0.014526 (0.059426) | 0.905973 / 0.176557 (0.729417) | 0.177767 / 0.737135 (-0.559368) | 0.125779 / 0.296338 (-0.170559) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398997 / 0.215209 (0.183788) | 3.959575 / 2.077655 (1.881920) | 1.907038 / 1.504120 (0.402918) | 1.732908 / 1.541195 (0.191713) | 1.757038 / 1.468490 (0.288548) | 0.495917 / 4.584777 (-4.088860) | 3.021437 / 3.745712 (-0.724275) | 2.793960 / 5.269862 (-2.475901) | 1.827753 / 4.565676 (-2.737923) | 0.057143 / 0.424275 (-0.367132) | 0.006583 / 0.007607 (-0.001024) | 0.469402 / 0.226044 (0.243357) | 4.685623 / 2.268929 (2.416695) | 2.325200 / 55.444624 (-53.119424) | 1.985559 / 6.876477 (-4.890918) | 2.151208 / 2.142072 (0.009136) | 0.589498 / 4.805227 (-4.215730) | 0.125433 / 6.500664 (-6.375231) | 0.060834 / 0.075469 (-0.014636) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.228217 / 1.841788 (-0.613571) | 18.076089 / 8.074308 (10.001780) | 13.814460 / 10.191392 (3.623068) | 0.144674 / 0.680424 (-0.535750) | 0.016749 / 0.534201 (-0.517452) | 0.332839 / 0.579283 (-0.246444) | 0.357211 / 0.434364 (-0.077153) | 0.380367 / 0.540337 (-0.159971) | 0.531177 / 1.386936 (-0.855759) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006006 / 0.011353 (-0.005347) | 0.003552 / 0.011008 (-0.007456) | 0.061822 / 0.038508 (0.023313) | 0.057724 / 0.023109 (0.034615) | 0.462326 / 0.275898 (0.186428) | 0.492842 / 0.323480 (0.169362) | 0.004833 / 0.007986 (-0.003152) | 0.002847 / 0.004328 (-0.001481) | 0.062278 / 0.004250 (0.058028) | 0.046754 / 0.037052 (0.009702) | 0.464185 / 0.258489 (0.205696) | 0.496416 / 0.293841 (0.202576) | 0.028949 / 0.128546 (-0.099597) | 0.008038 / 0.075646 (-0.067608) | 0.067572 / 0.419271 (-0.351700) | 0.041176 / 0.043533 (-0.002356) | 0.460047 / 0.255139 (0.204908) | 0.482728 / 0.283200 (0.199528) | 0.020047 / 0.141683 (-0.121635) | 1.455958 / 1.452155 (0.003804) | 1.525730 / 1.492716 (0.033014) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.283643 / 0.018006 (0.265637) | 0.443046 / 0.000490 (0.442556) | 0.041019 / 0.000200 (0.040819) | 0.000340 / 0.000054 (0.000286) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026229 / 0.037411 (-0.011182) | 0.081498 / 0.014526 (0.066972) | 0.091412 / 0.176557 (-0.085145) | 0.146621 / 0.737135 (-0.590514) | 0.092113 / 0.296338 (-0.204225) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463525 / 0.215209 (0.248315) | 4.629852 / 2.077655 (2.552198) | 2.564831 / 1.504120 (1.060711) | 2.386976 / 1.541195 (0.845781) | 2.457757 / 1.468490 (0.989266) | 0.507317 / 4.584777 (-4.077460) | 3.142418 / 3.745712 (-0.603294) | 2.851642 / 5.269862 (-2.418219) | 1.894444 / 4.565676 (-2.671233) | 0.058495 / 0.424275 (-0.365780) | 0.006453 / 0.007607 (-0.001154) | 0.545363 / 0.226044 (0.319319) | 5.448092 / 2.268929 (3.179164) | 2.996328 / 55.444624 (-52.448296) | 2.664666 / 6.876477 (-4.211811) | 2.832247 / 2.142072 (0.690174) | 0.597631 / 4.805227 (-4.207596) | 0.126101 / 6.500664 (-6.374563) | 0.062573 / 0.075469 (-0.012896) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.366502 / 1.841788 (-0.475286) | 18.872990 / 8.074308 (10.798682) | 14.892114 / 10.191392 (4.700722) | 0.146668 / 0.680424 (-0.533756) | 0.017876 / 0.534201 (-0.516325) | 0.338490 / 0.579283 (-0.240793) | 0.357471 / 0.434364 (-0.076893) | 0.398730 / 0.540337 (-0.141608) | 0.542464 / 1.386936 (-0.844472) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009132 / 0.011353 (-0.002221) | 0.005796 / 0.011008 (-0.005212) | 0.119495 / 0.038508 (0.080987) | 0.081708 / 0.023109 (0.058599) | 0.432940 / 0.275898 (0.157042) | 0.466793 / 0.323480 (0.143313) | 0.006464 / 0.007986 (-0.001521) | 0.004308 / 0.004328 (-0.000021) | 0.086344 / 0.004250 (0.082093) | 0.065987 / 0.037052 (0.028935) | 0.445213 / 0.258489 (0.186724) | 0.482405 / 0.293841 (0.188564) | 0.053553 / 0.128546 (-0.074993) | 0.015320 / 0.075646 (-0.060326) | 0.455669 / 0.419271 (0.036397) | 0.071619 / 0.043533 (0.028086) | 0.434843 / 0.255139 (0.179704) | 0.503224 / 0.283200 (0.220025) | 0.038280 / 0.141683 (-0.103403) | 1.901877 / 1.452155 (0.449722) | 2.040406 / 1.492716 (0.547690) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.268275 / 0.018006 (0.250269) | 0.622795 / 0.000490 (0.622305) | 0.004572 / 0.000200 (0.004372) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032514 / 0.037411 (-0.004898) | 0.100619 / 0.014526 (0.086093) | 0.118407 / 0.176557 (-0.058149) | 0.190311 / 0.737135 (-0.546824) | 0.117160 / 0.296338 (-0.179178) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.629836 / 0.215209 (0.414627) | 6.236124 / 2.077655 (4.158470) | 2.750775 / 1.504120 (1.246655) | 2.380111 / 1.541195 (0.838916) | 2.487279 / 1.468490 (1.018789) | 0.849568 / 4.584777 (-3.735209) | 5.571308 / 3.745712 (1.825596) | 4.934114 / 5.269862 (-0.335747) | 3.205478 / 4.565676 (-1.360198) | 0.104804 / 0.424275 (-0.319471) | 0.009856 / 0.007607 (0.002248) | 0.753352 / 0.226044 (0.527308) | 7.523482 / 2.268929 (5.254554) | 3.660088 / 55.444624 (-51.784537) | 2.726493 / 6.876477 (-4.149984) | 3.011344 / 2.142072 (0.869271) | 1.093410 / 4.805227 (-3.711817) | 0.229758 / 6.500664 (-6.270906) | 0.081516 / 0.075469 (0.006047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.700199 / 1.841788 (-0.141588) | 25.238736 / 8.074308 (17.164428) | 23.188131 / 10.191392 (12.996739) | 0.257862 / 0.680424 (-0.422562) | 0.028885 / 0.534201 (-0.505316) | 0.510693 / 0.579283 (-0.068590) | 0.648474 / 0.434364 (0.214110) | 0.576314 / 0.540337 (0.035976) | 0.800606 / 1.386936 (-0.586330) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009426 / 0.011353 (-0.001927) | 0.006205 / 0.011008 (-0.004803) | 0.083947 / 0.038508 (0.045438) | 0.089164 / 0.023109 (0.066055) | 0.540500 / 0.275898 (0.264602) | 0.578825 / 0.323480 (0.255345) | 0.006792 / 0.007986 (-0.001194) | 0.005125 / 0.004328 (0.000797) | 0.083284 / 0.004250 (0.079034) | 0.067539 / 0.037052 (0.030487) | 0.544330 / 0.258489 (0.285841) | 0.593836 / 0.293841 (0.299995) | 0.050647 / 0.128546 (-0.077899) | 0.014688 / 0.075646 (-0.060959) | 0.095977 / 0.419271 (-0.323295) | 0.062326 / 0.043533 (0.018793) | 0.536096 / 0.255139 (0.280957) | 0.578691 / 0.283200 (0.295492) | 0.035488 / 0.141683 (-0.106194) | 1.911145 / 1.452155 (0.458990) | 1.977647 / 1.492716 (0.484931) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.368365 / 0.018006 (0.350359) | 0.609836 / 0.000490 (0.609346) | 0.054720 / 0.000200 (0.054520) | 0.000465 / 0.000054 (0.000411) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036057 / 0.037411 (-0.001355) | 0.126434 / 0.014526 (0.111908) | 0.124740 / 0.176557 (-0.051817) | 0.198907 / 0.737135 (-0.538228) | 0.138201 / 0.296338 (-0.158137) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.684814 / 0.215209 (0.469605) | 6.738182 / 2.077655 (4.660527) | 3.231054 / 1.504120 (1.726934) | 2.889550 / 1.541195 (1.348355) | 2.933985 / 1.468490 (1.465495) | 0.867176 / 4.584777 (-3.717601) | 5.465475 / 3.745712 (1.719763) | 4.928370 / 5.269862 (-0.341492) | 3.126382 / 4.565676 (-1.439294) | 0.129673 / 0.424275 (-0.294603) | 0.009755 / 0.007607 (0.002148) | 0.797860 / 0.226044 (0.571816) | 8.003178 / 2.268929 (5.734250) | 4.081658 / 55.444624 (-51.362966) | 3.303837 / 6.876477 (-3.572640) | 3.574577 / 2.142072 (1.432505) | 1.064674 / 4.805227 (-3.740554) | 0.232894 / 6.500664 (-6.267770) | 0.082298 / 0.075469 (0.006829) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.858701 / 1.841788 (0.016913) | 25.839794 / 8.074308 (17.765485) | 24.291425 / 10.191392 (14.100033) | 0.250181 / 0.680424 (-0.430243) | 0.034479 / 0.534201 (-0.499722) | 0.540754 / 0.579283 (-0.038529) | 0.615996 / 0.434364 (0.181632) | 0.631499 / 0.540337 (0.091161) | 0.838719 / 1.386936 (-0.548217) |\n\n</details>\n</details>\n\n\n"
] | 2023-08-29T12:29:47Z
| 2023-08-29T13:04:59Z
| 2023-08-29T12:52:48Z
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6189/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6189/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6189.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6189",
"merged_at": "2023-08-29T12:52:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6189.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6189"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6605
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6605/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6605/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6605/events
|
https://github.com/huggingface/datasets/issues/6605
| 2,090,188,376
|
I_kwDODunzps58lb5Y
| 6,605
|
ELI5 no longer available, but referenced in example code
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/81480344?v=4",
"events_url": "https://api.github.com/users/drdsgvo/events{/privacy}",
"followers_url": "https://api.github.com/users/drdsgvo/followers",
"following_url": "https://api.github.com/users/drdsgvo/following{/other_user}",
"gists_url": "https://api.github.com/users/drdsgvo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/drdsgvo",
"id": 81480344,
"login": "drdsgvo",
"node_id": "MDQ6VXNlcjgxNDgwMzQ0",
"organizations_url": "https://api.github.com/users/drdsgvo/orgs",
"received_events_url": "https://api.github.com/users/drdsgvo/received_events",
"repos_url": "https://api.github.com/users/drdsgvo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/drdsgvo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drdsgvo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/drdsgvo",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Addressed in https://github.com/huggingface/transformers/pull/28715."
] | 2024-01-19T10:21:52Z
| 2024-02-01T17:58:23Z
| 2024-02-01T17:58:22Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Here, an example code is given:
https://huggingface.co/docs/transformers/tasks/language_modeling
This code + article references the ELI5 dataset.
ELI5 is no longer available, as the ELI5 dataset page states: https://huggingface.co/datasets/eli5
"Defunct: Dataset "eli5" is defunct and no longer accessible due to unavailability of the source data.
Reddit recently [changed the terms of access](https://www.reddit.com/r/reddit/comments/12qwagm/an_update_regarding_reddits_api/) to its API, making the source data for this dataset unavailable.
"
Please change the example code to use a different dataset.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6605/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6605/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4655
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4655/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4655/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4655/events
|
https://github.com/huggingface/datasets/issues/4655
| 1,296,720,896
|
I_kwDODunzps5NSmQA
| 4,655
|
Simple Wikipedia
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/omarespejel",
"id": 4755430,
"login": "omarespejel",
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/omarespejel",
"user_view_type": "public"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
closed
| false
| null |
[] | null |
[
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/simple-wiki)."
] | 2022-07-07T02:51:26Z
| 2022-07-14T02:16:33Z
| 2022-07-14T02:16:33Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Adding a Dataset
- **Name:** *Simple Wikipedia*
- **Description:** *Two different versions of the data set now exist. Both were generated by aligning Simple English Wikipedia and English Wikipedia. A complete description of the extraction process can be found in "Simple English Wikipedia: A New Simplification Task", William Coster and David Kauchak (2011).*
- **Paper:** *https://aclanthology.org/P11-2117/*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/SimpleWiki.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/omarespejel",
"id": 4755430,
"login": "omarespejel",
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/omarespejel",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4655/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4655/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6377
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6377/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6377/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6377/events
|
https://github.com/huggingface/datasets/issues/6377
| 1,973,937,612
|
I_kwDODunzps51p-XM
| 6,377
|
Support pyarrow 14.0.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[] | 2023-11-02T10:22:08Z
| 2023-11-02T15:15:45Z
| 2023-11-02T15:15:45Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Support pyarrow 14.0.0 by fixing the root cause of:
- #6374
and revert:
- #6375
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6377/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6377/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5660
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5660/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5660/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5660/events
|
https://github.com/huggingface/datasets/issues/5660
| 1,635,543,646
|
I_kwDODunzps5hfGpe
| 5,660
|
integration with imbalanced-learn
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/30216?v=4",
"events_url": "https://api.github.com/users/tansaku/events{/privacy}",
"followers_url": "https://api.github.com/users/tansaku/followers",
"following_url": "https://api.github.com/users/tansaku/following{/other_user}",
"gists_url": "https://api.github.com/users/tansaku/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tansaku",
"id": 30216,
"login": "tansaku",
"node_id": "MDQ6VXNlcjMwMjE2",
"organizations_url": "https://api.github.com/users/tansaku/orgs",
"received_events_url": "https://api.github.com/users/tansaku/received_events",
"repos_url": "https://api.github.com/users/tansaku/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tansaku/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tansaku/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tansaku",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "ffffff",
"default": true,
"description": "This will not be worked on",
"id": 1935892913,
"name": "wontfix",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix"
}
] |
closed
| false
| null |
[] | null |
[
"You can convert any dataset to pandas to be used with imbalanced-learn using `.to_pandas()`\r\n\r\nOtherwise if you want to keep a `Dataset` object and still use e.g. [make_imbalance](https://imbalanced-learn.org/stable/references/generated/imblearn.datasets.make_imbalance.html#imblearn.datasets.make_imbalance), you just need to pass the list of rows ids and labels:\r\n\r\n```python\r\nrow_indices = list(range(len(dataset)))\r\nresampled_row_indices, _ = make_imbalance(\r\n row_indices,\r\n dataset[\"label\"],\r\n sampling_strategy={0: 25, 1: 50, 2: 50},\r\n random_state=RANDOM_STATE,\r\n)\r\n\r\nresampled_dataset = dataset.select(resampled_row_indices)\r\n```"
] | 2023-03-22T11:05:17Z
| 2023-07-06T18:10:15Z
| 2023-07-06T18:10:15Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Wouldn't it be great if the various class balancing operations from imbalanced-learn were available as part of datasets?
### Motivation
I'm trying to use imbalanced-learn to balance a dataset, but it's not clear how to get the two to interoperate - what would be great would be some examples. I've looked online, asked gpt-4, but so far not making much progress.
### Your contribution
If I can get this working myself I can submit a PR with example code to go in the docs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5660/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5660/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6452
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6452/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6452/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6452/events
|
https://github.com/huggingface/datasets/pull/6452
| 2,011,632,708
|
PR_kwDODunzps5gZ5oe
| 6,452
|
Praveen_repo_pull_req
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/151713216?v=4",
"events_url": "https://api.github.com/users/Praveenhh/events{/privacy}",
"followers_url": "https://api.github.com/users/Praveenhh/followers",
"following_url": "https://api.github.com/users/Praveenhh/following{/other_user}",
"gists_url": "https://api.github.com/users/Praveenhh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Praveenhh",
"id": 151713216,
"login": "Praveenhh",
"node_id": "U_kgDOCQr1wA",
"organizations_url": "https://api.github.com/users/Praveenhh/orgs",
"received_events_url": "https://api.github.com/users/Praveenhh/received_events",
"repos_url": "https://api.github.com/users/Praveenhh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Praveenhh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Praveenhh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Praveenhh",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2023-11-27T07:07:50Z
| 2023-11-27T09:28:00Z
| 2023-11-27T09:28:00Z
|
NONE
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6452/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6452/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6452.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6452",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6452.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6452"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4953
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4953/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4953/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4953/events
|
https://github.com/huggingface/datasets/issues/4953
| 1,366,356,514
|
I_kwDODunzps5RcPIi
| 4,953
|
CI test of TensorFlow is failing
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[] | 2022-09-08T13:39:29Z
| 2022-09-08T15:14:45Z
| 2022-09-08T15:14:45Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Describe the bug
The following CI test fails: https://github.com/huggingface/datasets/runs/8246722693?check_suite_focus=true
```
FAILED tests/test_py_utils.py::TempSeedTest::test_tensorflow - AssertionError:
```
Details:
```
_________________________ TempSeedTest.test_tensorflow _________________________
[gw0] linux -- Python 3.7.13 /opt/hostedtoolcache/Python/3.7.13/x64/bin/python
self = <tests.test_py_utils.TempSeedTest testMethod=test_tensorflow>
@require_tf
def test_tensorflow(self):
import tensorflow as tf
from tensorflow.keras import layers
def gen_random_output():
model = layers.Dense(2)
x = tf.random.uniform((1, 3))
return model(x).numpy()
with temp_seed(42, set_tensorflow=True):
out1 = gen_random_output()
with temp_seed(42, set_tensorflow=True):
out2 = gen_random_output()
out3 = gen_random_output()
> np.testing.assert_equal(out1, out2)
E AssertionError:
E Arrays are not equal
E
E Mismatched elements: 2 / 2 (100%)
E Max absolute difference: 0.84619296
E Max relative difference: 16.083529
E x: array([[-0.793581, 0.333286]], dtype=float32)
E y: array([[0.052612, 0.539708]], dtype=float32)
tests/test_py_utils.py:149: AssertionError
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rocketknight1",
"id": 12866554,
"login": "Rocketknight1",
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rocketknight1",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4953/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4953/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6530
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6530/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6530/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6530/events
|
https://github.com/huggingface/datasets/issues/6530
| 2,054,817,609
|
I_kwDODunzps56egdJ
| 6,530
|
Impossible to save a mapped dataset to disk
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kopyl",
"id": 17604849,
"login": "kopyl",
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"repos_url": "https://api.github.com/users/kopyl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kopyl",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"I solved it with `train_dataset.with_format(None)`\r\nBut then faced some more issues (which i later solved too).\r\n\r\nHuggingface does not seem to care, so I do. Here is an updated training script which saves a pre-processed (mapped) dataset to your local directory if you specify `--save_precomputed_data_dir=DIR_NAME`. Then use `--train_precomputed_data_dir` with the same dir to load it instead of `--dataset_name`.\r\n\r\n[Proper SDXL trainer code](https://github.com/kopyl/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py)\r\n[Notebook for pre-computing a dataset and saving locally](https://colab.research.google.com/drive/17Yo08hePx-NlHs99RecdeiNc8CQg4O7l?usp=sharing)\r\n\r\nExample:\r\n\r\n1st run (nothing is pre-computed yet):\r\n```\r\naccelerate launch train_text_to_image_sdxl.py \\\r\n --pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-1.0 \\\r\n --pretrained_vae_model_name_or_path=madebyollin/sdxl-vae-fp16-fix \\\r\n --dataset_name=lambdalabs/pokemon-blip-captions \\\r\n --save_precomputed_data_dir=\"test-5\"\r\n```\r\n\r\n2nd run (the pre-computed dataset is saved to `test-5` directory):\r\n```\r\naccelerate launch train_text_to_image_sdxl.py \\\r\n --pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-1.0 \\\r\n --pretrained_vae_model_name_or_path=madebyollin/sdxl-vae-fp16-fix \\\r\n --train_precomputed_data_dir test-5\r\n```\r\n\r\nThis way when you're gonna be using your pre-computed dataset you don't need to worry about re-mapping your dataset when you change an argument for your trainer script"
] | 2023-12-23T15:18:27Z
| 2023-12-24T09:40:30Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I want to play around with different hyperparameters when training but don't want to re-map my dataset with 3 million samples each time for tens of hours when I [fully fine-tune SDXL](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py).
After I do the mapping like this:
```
train_dataset = train_dataset.map(compute_embeddings_fn, batched=True)
train_dataset = train_dataset.map(
compute_vae_encodings_fn,
batched=True,
batch_size=16,
)
```
and try to save it like this:
`train_dataset.save_to_disk("test")`
i get this error ([full traceback](https://pastebin.com/kq3vt739)):
```
TypeError: Object of type function is not JSON serializable
The format kwargs must be JSON serializable, but key 'transform' isn't.
```
But what is interesting is that pushing to hub works like that:
`train_dataset.push_to_hub("kopyl/mapped-833-icons-sdxl-1024-dataset", token=True)`
Here is the link of the pushed dataset: https://huggingface.co/datasets/kopyl/mapped-833-icons-sdxl-1024-dataset
### Steps to reproduce the bug
Here is the self-contained notebook:
https://colab.research.google.com/drive/1RtCsEMVcwWcMwlWURk_cj_9xUBHz065M?usp=sharing
### Expected behavior
It should be easily saved to disk
### Environment info
NVIDIA A100, Linux (NC24ads A100 v4 from Azure), CUDA 12.2.
[pip freeze](https://pastebin.com/QTNb6iru)
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6530/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6530/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5572
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5572/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5572/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5572/events
|
https://github.com/huggingface/datasets/issues/5572
| 1,597,257,624
|
I_kwDODunzps5fNDeY
| 5,572
|
Datasets 2.10.0 does not reuse the dataset cache
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/45281?v=4",
"events_url": "https://api.github.com/users/lsb/events{/privacy}",
"followers_url": "https://api.github.com/users/lsb/followers",
"following_url": "https://api.github.com/users/lsb/following{/other_user}",
"gists_url": "https://api.github.com/users/lsb/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lsb",
"id": 45281,
"login": "lsb",
"node_id": "MDQ6VXNlcjQ1Mjgx",
"organizations_url": "https://api.github.com/users/lsb/orgs",
"received_events_url": "https://api.github.com/users/lsb/received_events",
"repos_url": "https://api.github.com/users/lsb/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lsb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lsb/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lsb",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2023-02-23T17:28:11Z
| 2023-02-23T18:03:55Z
| 2023-02-23T18:03:55Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
download_mode="reuse_dataset_if_exists" will always consider that a dataset doesn't exist.
Specifically, upon losing an internet connection trying to load a dataset for a second time in ten seconds, a connection error results, showing a breakpoint of:
```
File ~/jupyterlab/.direnv/python-3.9.6/lib/python3.9/site-packages/datasets/load.py:1174, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1165 except Exception as e: # noqa: catch any exception of hf_hub and consider that the dataset doesn't exist
1166 if isinstance(
1167 e,
1168 (
(...)
1172 ),
1173 ):
-> 1174 raise ConnectionError(f"Couldn't reach '{path}' on the Hub ({type(e).__name__})")
1175 elif "404" in str(e):
1176 msg = f"Dataset '{path}' doesn't exist on the Hub"
ConnectionError: Couldn't reach 'lsb/tenk' on the Hub (ConnectionError)
```
This has been around since at least v2.0.
### Steps to reproduce the bug
```
from datasets import load_dataset
import numpy as np
tenk = load_dataset("lsb/tenk") # ten thousand integers
print(np.average(tenk['train']['a'])) # prints 4999.5
### now disconnect your internet
tenk_too = load_dataset("lsb/tenk", download_mode="reuse_dataset_if_exists")
# Raises ConnectionError: Couldn't reach 'lsb/tenk' on the Hub (ConnectionError)
```
### Expected behavior
I expected that I would be able to reuse the dataset I just downloaded.
### Environment info
- `datasets` version: 2.10.0
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.9.6
- PyArrow version: 7.0.0
- Pandas version: 1.5.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/45281?v=4",
"events_url": "https://api.github.com/users/lsb/events{/privacy}",
"followers_url": "https://api.github.com/users/lsb/followers",
"following_url": "https://api.github.com/users/lsb/following{/other_user}",
"gists_url": "https://api.github.com/users/lsb/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lsb",
"id": 45281,
"login": "lsb",
"node_id": "MDQ6VXNlcjQ1Mjgx",
"organizations_url": "https://api.github.com/users/lsb/orgs",
"received_events_url": "https://api.github.com/users/lsb/received_events",
"repos_url": "https://api.github.com/users/lsb/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lsb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lsb/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lsb",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5572/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5572/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6243
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6243/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6243/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6243/events
|
https://github.com/huggingface/datasets/pull/6243
| 1,898,532,784
|
PR_kwDODunzps5aclIy
| 6,243
|
Fix cast from fixed size list to variable size list
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006784 / 0.011353 (-0.004569) | 0.004051 / 0.011008 (-0.006957) | 0.083790 / 0.038508 (0.045282) | 0.081219 / 0.023109 (0.058110) | 0.313195 / 0.275898 (0.037297) | 0.336954 / 0.323480 (0.013475) | 0.004324 / 0.007986 (-0.003662) | 0.004516 / 0.004328 (0.000188) | 0.065051 / 0.004250 (0.060801) | 0.057647 / 0.037052 (0.020595) | 0.316675 / 0.258489 (0.058186) | 0.357936 / 0.293841 (0.064095) | 0.030980 / 0.128546 (-0.097566) | 0.008844 / 0.075646 (-0.066802) | 0.287027 / 0.419271 (-0.132245) | 0.052130 / 0.043533 (0.008597) | 0.308125 / 0.255139 (0.052986) | 0.337345 / 0.283200 (0.054145) | 0.025781 / 0.141683 (-0.115902) | 1.466161 / 1.452155 (0.014006) | 1.565824 / 1.492716 (0.073108) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.299112 / 0.018006 (0.281106) | 0.640520 / 0.000490 (0.640030) | 0.008846 / 0.000200 (0.008647) | 0.000273 / 0.000054 (0.000219) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029853 / 0.037411 (-0.007559) | 0.081697 / 0.014526 (0.067172) | 0.099110 / 0.176557 (-0.077447) | 0.155864 / 0.737135 (-0.581271) | 0.098749 / 0.296338 (-0.197590) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.385722 / 0.215209 (0.170512) | 3.851490 / 2.077655 (1.773835) | 1.851995 / 1.504120 (0.347875) | 1.660398 / 1.541195 (0.119204) | 1.769370 / 1.468490 (0.300879) | 0.481523 / 4.584777 (-4.103254) | 3.550449 / 3.745712 (-0.195263) | 3.424782 / 5.269862 (-1.845079) | 2.106470 / 4.565676 (-2.459206) | 0.056500 / 0.424275 (-0.367775) | 0.007891 / 0.007607 (0.000284) | 0.465564 / 0.226044 (0.239520) | 4.662892 / 2.268929 (2.393964) | 2.305424 / 55.444624 (-53.139201) | 1.980524 / 6.876477 (-4.895953) | 2.218423 / 2.142072 (0.076350) | 0.584662 / 4.805227 (-4.220565) | 0.132325 / 6.500664 (-6.368340) | 0.060773 / 0.075469 (-0.014696) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254261 / 1.841788 (-0.587527) | 19.479805 / 8.074308 (11.405497) | 14.222687 / 10.191392 (4.031295) | 0.149829 / 0.680424 (-0.530595) | 0.018630 / 0.534201 (-0.515571) | 0.395284 / 0.579283 (-0.183999) | 0.413385 / 0.434364 (-0.020978) | 0.462931 / 0.540337 (-0.077406) | 0.645359 / 1.386936 (-0.741577) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006991 / 0.011353 (-0.004362) | 0.004306 / 0.011008 (-0.006702) | 0.065213 / 0.038508 (0.026705) | 0.082442 / 0.023109 (0.059332) | 0.411294 / 0.275898 (0.135396) | 0.452176 / 0.323480 (0.128696) | 0.005802 / 0.007986 (-0.002183) | 0.003556 / 0.004328 (-0.000772) | 0.066163 / 0.004250 (0.061913) | 0.060680 / 0.037052 (0.023628) | 0.416975 / 0.258489 (0.158486) | 0.456353 / 0.293841 (0.162512) | 0.033584 / 0.128546 (-0.094963) | 0.008687 / 0.075646 (-0.066959) | 0.071300 / 0.419271 (-0.347972) | 0.049382 / 0.043533 (0.005849) | 0.409329 / 0.255139 (0.154190) | 0.434829 / 0.283200 (0.151629) | 0.022966 / 0.141683 (-0.118716) | 1.493847 / 1.452155 (0.041692) | 1.582372 / 1.492716 (0.089656) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280578 / 0.018006 (0.262572) | 0.538122 / 0.000490 (0.537632) | 0.004515 / 0.000200 (0.004315) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033383 / 0.037411 (-0.004028) | 0.093426 / 0.014526 (0.078901) | 0.109314 / 0.176557 (-0.067242) | 0.162349 / 0.737135 (-0.574786) | 0.109849 / 0.296338 (-0.186490) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431073 / 0.215209 (0.215864) | 4.311942 / 2.077655 (2.234287) | 2.291170 / 1.504120 (0.787051) | 2.132266 / 1.541195 (0.591072) | 2.236526 / 1.468490 (0.768036) | 0.492001 / 4.584777 (-4.092776) | 3.523013 / 3.745712 (-0.222699) | 3.413481 / 5.269862 (-1.856381) | 2.112979 / 4.565676 (-2.452698) | 0.058654 / 0.424275 (-0.365621) | 0.007729 / 0.007607 (0.000121) | 0.512027 / 0.226044 (0.285982) | 5.125264 / 2.268929 (2.856336) | 2.836281 / 55.444624 (-52.608344) | 2.447253 / 6.876477 (-4.429224) | 2.711908 / 2.142072 (0.569835) | 0.592598 / 4.805227 (-4.212629) | 0.134837 / 6.500664 (-6.365827) | 0.059813 / 0.075469 (-0.015656) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.373464 / 1.841788 (-0.468323) | 20.548983 / 8.074308 (12.474675) | 14.799833 / 10.191392 (4.608441) | 0.168601 / 0.680424 (-0.511823) | 0.020358 / 0.534201 (-0.513843) | 0.398790 / 0.579283 (-0.180494) | 0.416921 / 0.434364 (-0.017443) | 0.480542 / 0.540337 (-0.059795) | 0.645062 / 1.386936 (-0.741874) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008616 / 0.011353 (-0.002737) | 0.004957 / 0.011008 (-0.006051) | 0.102629 / 0.038508 (0.064121) | 0.080492 / 0.023109 (0.057383) | 0.461817 / 0.275898 (0.185919) | 0.487964 / 0.323480 (0.164484) | 0.006336 / 0.007986 (-0.001649) | 0.004607 / 0.004328 (0.000278) | 0.074311 / 0.004250 (0.070061) | 0.060368 / 0.037052 (0.023315) | 0.458076 / 0.258489 (0.199587) | 0.493028 / 0.293841 (0.199187) | 0.044153 / 0.128546 (-0.084394) | 0.014066 / 0.075646 (-0.061581) | 0.369848 / 0.419271 (-0.049424) | 0.061690 / 0.043533 (0.018157) | 0.439728 / 0.255139 (0.184590) | 0.484706 / 0.283200 (0.201506) | 0.034657 / 0.141683 (-0.107026) | 1.710591 / 1.452155 (0.258437) | 1.900225 / 1.492716 (0.407509) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.308837 / 0.018006 (0.290831) | 0.579561 / 0.000490 (0.579072) | 0.010163 / 0.000200 (0.009963) | 0.000613 / 0.000054 (0.000558) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028108 / 0.037411 (-0.009303) | 0.085072 / 0.014526 (0.070546) | 0.103375 / 0.176557 (-0.073182) | 0.173765 / 0.737135 (-0.563371) | 0.102460 / 0.296338 (-0.193879) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.602642 / 0.215209 (0.387433) | 5.582537 / 2.077655 (3.504882) | 2.405553 / 1.504120 (0.901434) | 2.057298 / 1.541195 (0.516103) | 2.223787 / 1.468490 (0.755297) | 0.846138 / 4.584777 (-3.738638) | 5.290306 / 3.745712 (1.544594) | 4.836066 / 5.269862 (-0.433795) | 2.951901 / 4.565676 (-1.613775) | 0.099432 / 0.424275 (-0.324843) | 0.009198 / 0.007607 (0.001591) | 0.731370 / 0.226044 (0.505325) | 6.663026 / 2.268929 (4.394098) | 3.200932 / 55.444624 (-52.243692) | 2.486654 / 6.876477 (-4.389823) | 2.833195 / 2.142072 (0.691123) | 0.989481 / 4.805227 (-3.815746) | 0.205176 / 6.500664 (-6.295488) | 0.073760 / 0.075469 (-0.001709) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.745494 / 1.841788 (-0.096294) | 24.649294 / 8.074308 (16.574986) | 22.312182 / 10.191392 (12.120790) | 0.245207 / 0.680424 (-0.435217) | 0.031971 / 0.534201 (-0.502230) | 0.495179 / 0.579283 (-0.084104) | 0.603233 / 0.434364 (0.168869) | 0.560906 / 0.540337 (0.020569) | 0.788292 / 1.386936 (-0.598644) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008922 / 0.011353 (-0.002431) | 0.005203 / 0.011008 (-0.005805) | 0.074414 / 0.038508 (0.035906) | 0.077552 / 0.023109 (0.054443) | 0.547217 / 0.275898 (0.271319) | 0.625298 / 0.323480 (0.301818) | 0.006135 / 0.007986 (-0.001851) | 0.004163 / 0.004328 (-0.000165) | 0.078014 / 0.004250 (0.073764) | 0.064484 / 0.037052 (0.027431) | 0.562356 / 0.258489 (0.303867) | 0.643613 / 0.293841 (0.349772) | 0.050155 / 0.128546 (-0.078391) | 0.013665 / 0.075646 (-0.061981) | 0.090224 / 0.419271 (-0.329048) | 0.063852 / 0.043533 (0.020319) | 0.560914 / 0.255139 (0.305775) | 0.591531 / 0.283200 (0.308331) | 0.036491 / 0.141683 (-0.105192) | 1.670898 / 1.452155 (0.218743) | 1.783924 / 1.492716 (0.291208) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.312764 / 0.018006 (0.294758) | 0.611116 / 0.000490 (0.610626) | 0.006367 / 0.000200 (0.006167) | 0.000130 / 0.000054 (0.000075) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033967 / 0.037411 (-0.003445) | 0.101550 / 0.014526 (0.087025) | 0.116953 / 0.176557 (-0.059604) | 0.180061 / 0.737135 (-0.557075) | 0.115220 / 0.296338 (-0.181118) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.642110 / 0.215209 (0.426901) | 6.361381 / 2.077655 (4.283727) | 2.948175 / 1.504120 (1.444055) | 2.633935 / 1.541195 (1.092740) | 2.822150 / 1.468490 (1.353660) | 0.931412 / 4.584777 (-3.653365) | 5.428540 / 3.745712 (1.682828) | 4.672920 / 5.269862 (-0.596941) | 3.102046 / 4.565676 (-1.463630) | 0.100825 / 0.424275 (-0.323450) | 0.009464 / 0.007607 (0.001857) | 0.774102 / 0.226044 (0.548058) | 7.715003 / 2.268929 (5.446074) | 3.987807 / 55.444624 (-51.456817) | 3.089129 / 6.876477 (-3.787347) | 3.333247 / 2.142072 (1.191174) | 1.012427 / 4.805227 (-3.792800) | 0.200662 / 6.500664 (-6.300002) | 0.072422 / 0.075469 (-0.003047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.680364 / 1.841788 (-0.161424) | 24.484576 / 8.074308 (16.410268) | 21.920990 / 10.191392 (11.729598) | 0.218604 / 0.680424 (-0.461820) | 0.035818 / 0.534201 (-0.498383) | 0.470648 / 0.579283 (-0.108635) | 0.585108 / 0.434364 (0.150744) | 0.539152 / 0.540337 (-0.001185) | 0.763999 / 1.386936 (-0.622937) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006304 / 0.011353 (-0.005049) | 0.003884 / 0.011008 (-0.007125) | 0.084847 / 0.038508 (0.046339) | 0.069372 / 0.023109 (0.046263) | 0.318876 / 0.275898 (0.042978) | 0.344733 / 0.323480 (0.021253) | 0.005139 / 0.007986 (-0.002847) | 0.003203 / 0.004328 (-0.001125) | 0.065758 / 0.004250 (0.061507) | 0.054189 / 0.037052 (0.017137) | 0.317475 / 0.258489 (0.058986) | 0.359310 / 0.293841 (0.065469) | 0.030639 / 0.128546 (-0.097908) | 0.008657 / 0.075646 (-0.066989) | 0.289127 / 0.419271 (-0.130144) | 0.052344 / 0.043533 (0.008811) | 0.316122 / 0.255139 (0.060983) | 0.338339 / 0.283200 (0.055140) | 0.022677 / 0.141683 (-0.119006) | 1.551629 / 1.452155 (0.099474) | 1.617917 / 1.492716 (0.125201) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231067 / 0.018006 (0.213061) | 0.450559 / 0.000490 (0.450070) | 0.008484 / 0.000200 (0.008284) | 0.000234 / 0.000054 (0.000179) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027054 / 0.037411 (-0.010357) | 0.081560 / 0.014526 (0.067034) | 0.094162 / 0.176557 (-0.082395) | 0.148583 / 0.737135 (-0.588552) | 0.093596 / 0.296338 (-0.202742) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.388616 / 0.215209 (0.173407) | 3.874905 / 2.077655 (1.797251) | 1.915845 / 1.504120 (0.411725) | 1.746410 / 1.541195 (0.205215) | 1.828789 / 1.468490 (0.360299) | 0.483270 / 4.584777 (-4.101506) | 3.489157 / 3.745712 (-0.256555) | 3.190086 / 5.269862 (-2.079776) | 1.978023 / 4.565676 (-2.587653) | 0.056290 / 0.424275 (-0.367985) | 0.007585 / 0.007607 (-0.000022) | 0.467051 / 0.226044 (0.241007) | 4.665971 / 2.268929 (2.397043) | 2.418550 / 55.444624 (-53.026075) | 2.048338 / 6.876477 (-4.828139) | 2.225275 / 2.142072 (0.083203) | 0.576601 / 4.805227 (-4.228626) | 0.131960 / 6.500664 (-6.368704) | 0.060177 / 0.075469 (-0.015292) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.249797 / 1.841788 (-0.591991) | 18.552939 / 8.074308 (10.478631) | 14.016616 / 10.191392 (3.825224) | 0.162869 / 0.680424 (-0.517555) | 0.018105 / 0.534201 (-0.516096) | 0.394838 / 0.579283 (-0.184445) | 0.403378 / 0.434364 (-0.030986) | 0.460931 / 0.540337 (-0.079407) | 0.637365 / 1.386936 (-0.749571) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006497 / 0.011353 (-0.004856) | 0.003928 / 0.011008 (-0.007080) | 0.063958 / 0.038508 (0.025450) | 0.069609 / 0.023109 (0.046500) | 0.401599 / 0.275898 (0.125701) | 0.428128 / 0.323480 (0.104648) | 0.005296 / 0.007986 (-0.002689) | 0.003332 / 0.004328 (-0.000996) | 0.063903 / 0.004250 (0.059652) | 0.056303 / 0.037052 (0.019250) | 0.400704 / 0.258489 (0.142214) | 0.435982 / 0.293841 (0.142141) | 0.032434 / 0.128546 (-0.096112) | 0.008570 / 0.075646 (-0.067077) | 0.070788 / 0.419271 (-0.348483) | 0.048252 / 0.043533 (0.004719) | 0.403269 / 0.255139 (0.148130) | 0.419796 / 0.283200 (0.136596) | 0.022598 / 0.141683 (-0.119085) | 1.481627 / 1.452155 (0.029472) | 1.578388 / 1.492716 (0.085672) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224552 / 0.018006 (0.206546) | 0.444059 / 0.000490 (0.443570) | 0.003757 / 0.000200 (0.003557) | 0.000225 / 0.000054 (0.000171) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032173 / 0.037411 (-0.005239) | 0.092562 / 0.014526 (0.078036) | 0.104972 / 0.176557 (-0.071584) | 0.156467 / 0.737135 (-0.580669) | 0.104274 / 0.296338 (-0.192065) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441693 / 0.215209 (0.226484) | 4.400217 / 2.077655 (2.322562) | 2.393862 / 1.504120 (0.889742) | 2.281178 / 1.541195 (0.739983) | 2.339895 / 1.468490 (0.871405) | 0.488734 / 4.584777 (-4.096043) | 3.523352 / 3.745712 (-0.222360) | 3.216761 / 5.269862 (-2.053101) | 2.007553 / 4.565676 (-2.558123) | 0.058050 / 0.424275 (-0.366225) | 0.007566 / 0.007607 (-0.000041) | 0.515439 / 0.226044 (0.289394) | 5.155086 / 2.268929 (2.886157) | 2.864958 / 55.444624 (-52.579666) | 2.592460 / 6.876477 (-4.284016) | 2.800449 / 2.142072 (0.658376) | 0.588441 / 4.805227 (-4.216786) | 0.131589 / 6.500664 (-6.369075) | 0.059075 / 0.075469 (-0.016394) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.353889 / 1.841788 (-0.487898) | 18.938285 / 8.074308 (10.863977) | 14.937141 / 10.191392 (4.745749) | 0.168811 / 0.680424 (-0.511613) | 0.020118 / 0.534201 (-0.514083) | 0.394791 / 0.579283 (-0.184492) | 0.414434 / 0.434364 (-0.019930) | 0.466821 / 0.540337 (-0.073517) | 0.629894 / 1.386936 (-0.757042) |\n\n</details>\n</details>\n\n\n",
"CI failures are unrelated",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005959 / 0.011353 (-0.005394) | 0.004164 / 0.011008 (-0.006844) | 0.082336 / 0.038508 (0.043828) | 0.070344 / 0.023109 (0.047234) | 0.348032 / 0.275898 (0.072134) | 0.366328 / 0.323480 (0.042848) | 0.003882 / 0.007986 (-0.004104) | 0.003619 / 0.004328 (-0.000709) | 0.063343 / 0.004250 (0.059093) | 0.056617 / 0.037052 (0.019564) | 0.351625 / 0.258489 (0.093136) | 0.395839 / 0.293841 (0.101998) | 0.030842 / 0.128546 (-0.097704) | 0.008363 / 0.075646 (-0.067284) | 0.300535 / 0.419271 (-0.118737) | 0.053303 / 0.043533 (0.009770) | 0.354782 / 0.255139 (0.099643) | 0.364918 / 0.283200 (0.081719) | 0.025365 / 0.141683 (-0.116318) | 1.555009 / 1.452155 (0.102854) | 1.597443 / 1.492716 (0.104727) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239808 / 0.018006 (0.221801) | 0.488164 / 0.000490 (0.487675) | 0.013183 / 0.000200 (0.012983) | 0.000483 / 0.000054 (0.000429) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027938 / 0.037411 (-0.009473) | 0.078521 / 0.014526 (0.063995) | 0.095498 / 0.176557 (-0.081059) | 0.150884 / 0.737135 (-0.586251) | 0.097577 / 0.296338 (-0.198762) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384546 / 0.215209 (0.169337) | 4.037707 / 2.077655 (1.960053) | 1.940321 / 1.504120 (0.436201) | 1.716741 / 1.541195 (0.175546) | 1.837200 / 1.468490 (0.368710) | 0.502112 / 4.584777 (-4.082665) | 3.770452 / 3.745712 (0.024740) | 3.325691 / 5.269862 (-1.944171) | 2.015622 / 4.565676 (-2.550055) | 0.056246 / 0.424275 (-0.368029) | 0.007320 / 0.007607 (-0.000287) | 0.445553 / 0.226044 (0.219509) | 4.567233 / 2.268929 (2.298304) | 2.319531 / 55.444624 (-53.125093) | 1.968664 / 6.876477 (-4.907813) | 2.122349 / 2.142072 (-0.019724) | 0.573688 / 4.805227 (-4.231540) | 0.131410 / 6.500664 (-6.369254) | 0.062767 / 0.075469 (-0.012702) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255244 / 1.841788 (-0.586543) | 19.042480 / 8.074308 (10.968172) | 13.935342 / 10.191392 (3.743950) | 0.161259 / 0.680424 (-0.519165) | 0.020582 / 0.534201 (-0.513619) | 0.391365 / 0.579283 (-0.187918) | 0.417462 / 0.434364 (-0.016902) | 0.473121 / 0.540337 (-0.067216) | 0.674768 / 1.386936 (-0.712168) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006299 / 0.011353 (-0.005054) | 0.003969 / 0.011008 (-0.007040) | 0.063558 / 0.038508 (0.025050) | 0.073847 / 0.023109 (0.050738) | 0.407064 / 0.275898 (0.131166) | 0.440695 / 0.323480 (0.117215) | 0.005783 / 0.007986 (-0.002203) | 0.003517 / 0.004328 (-0.000812) | 0.065721 / 0.004250 (0.061470) | 0.056390 / 0.037052 (0.019338) | 0.419019 / 0.258489 (0.160530) | 0.450721 / 0.293841 (0.156880) | 0.034094 / 0.128546 (-0.094452) | 0.008594 / 0.075646 (-0.067052) | 0.069254 / 0.419271 (-0.350017) | 0.049218 / 0.043533 (0.005685) | 0.413312 / 0.255139 (0.158173) | 0.439454 / 0.283200 (0.156255) | 0.021481 / 0.141683 (-0.120202) | 1.517536 / 1.452155 (0.065382) | 1.530532 / 1.492716 (0.037815) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235392 / 0.018006 (0.217386) | 0.477371 / 0.000490 (0.476881) | 0.007070 / 0.000200 (0.006870) | 0.000132 / 0.000054 (0.000077) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031909 / 0.037411 (-0.005502) | 0.092459 / 0.014526 (0.077933) | 0.105795 / 0.176557 (-0.070761) | 0.157745 / 0.737135 (-0.579390) | 0.104187 / 0.296338 (-0.192152) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424385 / 0.215209 (0.209176) | 4.445371 / 2.077655 (2.367716) | 2.423639 / 1.504120 (0.919519) | 2.188167 / 1.541195 (0.646972) | 2.171023 / 1.468490 (0.702532) | 0.483566 / 4.584777 (-4.101211) | 3.825702 / 3.745712 (0.079990) | 3.276350 / 5.269862 (-1.993512) | 2.063075 / 4.565676 (-2.502602) | 0.061628 / 0.424275 (-0.362647) | 0.008176 / 0.007607 (0.000569) | 0.506697 / 0.226044 (0.280653) | 5.067924 / 2.268929 (2.798995) | 2.785567 / 55.444624 (-52.659057) | 2.457340 / 6.876477 (-4.419137) | 2.599646 / 2.142072 (0.457574) | 0.581550 / 4.805227 (-4.223677) | 0.131712 / 6.500664 (-6.368952) | 0.058776 / 0.075469 (-0.016693) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.356639 / 1.841788 (-0.485148) | 20.103463 / 8.074308 (12.029155) | 14.481010 / 10.191392 (4.289618) | 0.162870 / 0.680424 (-0.517554) | 0.023197 / 0.534201 (-0.511004) | 0.413042 / 0.579283 (-0.166241) | 0.427494 / 0.434364 (-0.006870) | 0.508457 / 0.540337 (-0.031880) | 0.662412 / 1.386936 (-0.724524) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-15T14:23:33Z
| 2023-09-19T18:02:21Z
| 2023-09-19T17:53:17Z
|
COLLABORATOR
| null | null | null |
Fix #6242
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6243/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6243/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6243.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6243",
"merged_at": "2023-09-19T17:53:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6243.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6243"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7003
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7003/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7003/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7003/events
|
https://github.com/huggingface/datasets/pull/7003
| 2,373,084,132
|
PR_kwDODunzps5zhRAK
| 7,003
|
minor fix for bfloat16
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7003). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005633 / 0.011353 (-0.005720) | 0.004366 / 0.011008 (-0.006642) | 0.064081 / 0.038508 (0.025573) | 0.031790 / 0.023109 (0.008681) | 0.239270 / 0.275898 (-0.036628) | 0.267424 / 0.323480 (-0.056055) | 0.003229 / 0.007986 (-0.004756) | 0.002849 / 0.004328 (-0.001479) | 0.050147 / 0.004250 (0.045897) | 0.046119 / 0.037052 (0.009066) | 0.253506 / 0.258489 (-0.004983) | 0.280464 / 0.293841 (-0.013377) | 0.030561 / 0.128546 (-0.097985) | 0.012258 / 0.075646 (-0.063388) | 0.212222 / 0.419271 (-0.207049) | 0.036695 / 0.043533 (-0.006838) | 0.242141 / 0.255139 (-0.012998) | 0.263014 / 0.283200 (-0.020186) | 0.020008 / 0.141683 (-0.121675) | 1.103701 / 1.452155 (-0.348453) | 1.151641 / 1.492716 (-0.341076) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095884 / 0.018006 (0.077878) | 0.300858 / 0.000490 (0.300368) | 0.000209 / 0.000200 (0.000009) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018713 / 0.037411 (-0.018698) | 0.063659 / 0.014526 (0.049134) | 0.074588 / 0.176557 (-0.101968) | 0.120779 / 0.737135 (-0.616356) | 0.077768 / 0.296338 (-0.218570) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281680 / 0.215209 (0.066471) | 2.754658 / 2.077655 (0.677003) | 1.454036 / 1.504120 (-0.050084) | 1.333153 / 1.541195 (-0.208042) | 1.383616 / 1.468490 (-0.084874) | 0.728933 / 4.584777 (-3.855844) | 2.374989 / 3.745712 (-1.370723) | 2.990824 / 5.269862 (-2.279038) | 1.899065 / 4.565676 (-2.666612) | 0.078657 / 0.424275 (-0.345619) | 0.005162 / 0.007607 (-0.002445) | 0.335883 / 0.226044 (0.109838) | 3.323047 / 2.268929 (1.054119) | 1.848290 / 55.444624 (-53.596335) | 1.519510 / 6.876477 (-5.356966) | 1.563608 / 2.142072 (-0.578465) | 0.807890 / 4.805227 (-3.997337) | 0.134517 / 6.500664 (-6.366147) | 0.042208 / 0.075469 (-0.033262) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963634 / 1.841788 (-0.878154) | 11.617868 / 8.074308 (3.543560) | 9.804648 / 10.191392 (-0.386744) | 0.142311 / 0.680424 (-0.538113) | 0.013748 / 0.534201 (-0.520453) | 0.300309 / 0.579283 (-0.278974) | 0.268214 / 0.434364 (-0.166150) | 0.342406 / 0.540337 (-0.197931) | 0.430315 / 1.386936 (-0.956621) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005533 / 0.011353 (-0.005820) | 0.004208 / 0.011008 (-0.006800) | 0.051732 / 0.038508 (0.013224) | 0.031296 / 0.023109 (0.008187) | 0.275091 / 0.275898 (-0.000807) | 0.296889 / 0.323480 (-0.026591) | 0.004363 / 0.007986 (-0.003623) | 0.002807 / 0.004328 (-0.001522) | 0.049727 / 0.004250 (0.045476) | 0.039798 / 0.037052 (0.002746) | 0.284379 / 0.258489 (0.025890) | 0.317281 / 0.293841 (0.023440) | 0.031286 / 0.128546 (-0.097261) | 0.012384 / 0.075646 (-0.063263) | 0.061619 / 0.419271 (-0.357652) | 0.032974 / 0.043533 (-0.010559) | 0.274313 / 0.255139 (0.019174) | 0.296142 / 0.283200 (0.012943) | 0.017391 / 0.141683 (-0.124291) | 1.148369 / 1.452155 (-0.303786) | 1.171539 / 1.492716 (-0.321177) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097309 / 0.018006 (0.079302) | 0.304701 / 0.000490 (0.304212) | 0.000208 / 0.000200 (0.000008) | 0.000110 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022382 / 0.037411 (-0.015030) | 0.077000 / 0.014526 (0.062474) | 0.088165 / 0.176557 (-0.088392) | 0.129060 / 0.737135 (-0.608075) | 0.090128 / 0.296338 (-0.206211) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285308 / 0.215209 (0.070099) | 2.816680 / 2.077655 (0.739025) | 1.542418 / 1.504120 (0.038298) | 1.418567 / 1.541195 (-0.122628) | 1.447018 / 1.468490 (-0.021472) | 0.737055 / 4.584777 (-3.847722) | 0.968285 / 3.745712 (-2.777427) | 2.880120 / 5.269862 (-2.389741) | 1.921813 / 4.565676 (-2.643864) | 0.079110 / 0.424275 (-0.345165) | 0.005826 / 0.007607 (-0.001781) | 0.336441 / 0.226044 (0.110397) | 3.326384 / 2.268929 (1.057456) | 1.929205 / 55.444624 (-53.515419) | 1.618215 / 6.876477 (-5.258261) | 1.769688 / 2.142072 (-0.372385) | 0.808009 / 4.805227 (-3.997219) | 0.136384 / 6.500664 (-6.364280) | 0.041332 / 0.075469 (-0.034137) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.010884 / 1.841788 (-0.830903) | 12.266118 / 8.074308 (4.191810) | 10.287424 / 10.191392 (0.096032) | 0.143172 / 0.680424 (-0.537251) | 0.015798 / 0.534201 (-0.518403) | 0.301604 / 0.579283 (-0.277679) | 0.131079 / 0.434364 (-0.303285) | 0.338396 / 0.540337 (-0.201941) | 0.460721 / 1.386936 (-0.926215) |\n\n</details>\n</details>\n\n\n"
] | 2024-06-25T16:10:04Z
| 2024-06-25T16:16:11Z
| 2024-06-25T16:10:10Z
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7003/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7003/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7003.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7003",
"merged_at": "2024-06-25T16:10:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7003.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7003"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6967
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6967/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6967/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6967/events
|
https://github.com/huggingface/datasets/issues/6967
| 2,349,146,398
|
I_kwDODunzps6MBSEe
| 6,967
|
Method to load Laion400m
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6862868?v=4",
"events_url": "https://api.github.com/users/humanely/events{/privacy}",
"followers_url": "https://api.github.com/users/humanely/followers",
"following_url": "https://api.github.com/users/humanely/following{/other_user}",
"gists_url": "https://api.github.com/users/humanely/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/humanely",
"id": 6862868,
"login": "humanely",
"node_id": "MDQ6VXNlcjY4NjI4Njg=",
"organizations_url": "https://api.github.com/users/humanely/orgs",
"received_events_url": "https://api.github.com/users/humanely/received_events",
"repos_url": "https://api.github.com/users/humanely/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/humanely/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/humanely/subscriptions",
"type": "User",
"url": "https://api.github.com/users/humanely",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] | 2024-06-12T16:04:04Z
| 2024-06-12T16:04:04Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Large datasets like Laion400m are provided as embeddings. The provided methods in load_dataset are not straightforward for loading embedding files, i.e. img_emb_XX.npy ; XX = 0 to 99
### Motivation
The trial and experimentation is the key pivot of HF. It would be great if HF can load embeddings files s,ealessly.
### Your contribution
I cam write the loader with some help.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6967/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6967/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6555
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6555/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6555/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6555/events
|
https://github.com/huggingface/datasets/pull/6555
| 2,063,841,286
|
PR_kwDODunzps5jIM79
| 6,555
|
Do not use Parquet exports if revision is passed
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6555). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"As shared on slack, `HubDatasetModuleFactoryWithParquetExport` raises a `DatasetsServerError` already if the user tries to load another revision that the one from the parquet export. And therefore it fall backs on using `HubDatasetModuleFactoryWithScript`",
"@lhoestq I would say that although current implementation finally returns `HubDatasetModuleFactoryWithScript` as expected, with this PR we avoid the useless call to `HubDatasetModuleFactoryWithParquetExport.get_module`, so this is more optimal.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005596 / 0.011353 (-0.005757) | 0.004022 / 0.011008 (-0.006986) | 0.064041 / 0.038508 (0.025533) | 0.030683 / 0.023109 (0.007574) | 0.245236 / 0.275898 (-0.030662) | 0.269657 / 0.323480 (-0.053823) | 0.003142 / 0.007986 (-0.004844) | 0.002821 / 0.004328 (-0.001507) | 0.048774 / 0.004250 (0.044523) | 0.043771 / 0.037052 (0.006719) | 0.258202 / 0.258489 (-0.000287) | 0.288381 / 0.293841 (-0.005460) | 0.028154 / 0.128546 (-0.100392) | 0.011071 / 0.075646 (-0.064576) | 0.209836 / 0.419271 (-0.209436) | 0.035923 / 0.043533 (-0.007609) | 0.248361 / 0.255139 (-0.006777) | 0.268728 / 0.283200 (-0.014472) | 0.019982 / 0.141683 (-0.121701) | 1.172330 / 1.452155 (-0.279824) | 1.192262 / 1.492716 (-0.300455) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089231 / 0.018006 (0.071225) | 0.299192 / 0.000490 (0.298702) | 0.000214 / 0.000200 (0.000014) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018358 / 0.037411 (-0.019053) | 0.062633 / 0.014526 (0.048107) | 0.076276 / 0.176557 (-0.100280) | 0.120862 / 0.737135 (-0.616274) | 0.075958 / 0.296338 (-0.220380) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291575 / 0.215209 (0.076366) | 2.855908 / 2.077655 (0.778253) | 1.459891 / 1.504120 (-0.044229) | 1.374945 / 1.541195 (-0.166250) | 1.333759 / 1.468490 (-0.134731) | 0.575428 / 4.584777 (-4.009348) | 2.414253 / 3.745712 (-1.331459) | 2.768222 / 5.269862 (-2.501639) | 1.705005 / 4.565676 (-2.860672) | 0.063406 / 0.424275 (-0.360869) | 0.004981 / 0.007607 (-0.002626) | 0.343826 / 0.226044 (0.117781) | 3.418143 / 2.268929 (1.149215) | 1.856571 / 55.444624 (-53.588053) | 1.571318 / 6.876477 (-5.305159) | 1.609897 / 2.142072 (-0.532175) | 0.646779 / 4.805227 (-4.158448) | 0.118143 / 6.500664 (-6.382521) | 0.042408 / 0.075469 (-0.033061) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965091 / 1.841788 (-0.876697) | 11.569655 / 8.074308 (3.495347) | 10.587818 / 10.191392 (0.396426) | 0.128518 / 0.680424 (-0.551905) | 0.013954 / 0.534201 (-0.520247) | 0.287244 / 0.579283 (-0.292039) | 0.263755 / 0.434364 (-0.170609) | 0.321661 / 0.540337 (-0.218676) | 0.428753 / 1.386936 (-0.958183) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005568 / 0.011353 (-0.005785) | 0.003755 / 0.011008 (-0.007253) | 0.049134 / 0.038508 (0.010626) | 0.032113 / 0.023109 (0.009004) | 0.276645 / 0.275898 (0.000747) | 0.299240 / 0.323480 (-0.024240) | 0.004297 / 0.007986 (-0.003689) | 0.002727 / 0.004328 (-0.001602) | 0.048420 / 0.004250 (0.044170) | 0.045070 / 0.037052 (0.008017) | 0.288597 / 0.258489 (0.030108) | 0.320824 / 0.293841 (0.026983) | 0.053293 / 0.128546 (-0.075253) | 0.011002 / 0.075646 (-0.064644) | 0.057747 / 0.419271 (-0.361524) | 0.034389 / 0.043533 (-0.009143) | 0.277914 / 0.255139 (0.022775) | 0.292919 / 0.283200 (0.009719) | 0.018252 / 0.141683 (-0.123431) | 1.187245 / 1.452155 (-0.264910) | 1.199823 / 1.492716 (-0.292893) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088338 / 0.018006 (0.070332) | 0.297498 / 0.000490 (0.297008) | 0.000206 / 0.000200 (0.000006) | 0.000048 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021445 / 0.037411 (-0.015966) | 0.075522 / 0.014526 (0.060996) | 0.086010 / 0.176557 (-0.090546) | 0.124938 / 0.737135 (-0.612197) | 0.087542 / 0.296338 (-0.208796) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292460 / 0.215209 (0.077251) | 2.841290 / 2.077655 (0.763635) | 1.537941 / 1.504120 (0.033821) | 1.409903 / 1.541195 (-0.131291) | 1.435339 / 1.468490 (-0.033151) | 0.578967 / 4.584777 (-4.005810) | 2.398588 / 3.745712 (-1.347125) | 2.662342 / 5.269862 (-2.607520) | 1.743055 / 4.565676 (-2.822622) | 0.064043 / 0.424275 (-0.360232) | 0.005030 / 0.007607 (-0.002577) | 0.348542 / 0.226044 (0.122498) | 3.395854 / 2.268929 (1.126926) | 1.918935 / 55.444624 (-53.525689) | 1.639320 / 6.876477 (-5.237157) | 1.740406 / 2.142072 (-0.401666) | 0.653346 / 4.805227 (-4.151881) | 0.117298 / 6.500664 (-6.383366) | 0.040635 / 0.075469 (-0.034834) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.008277 / 1.841788 (-0.833510) | 12.069369 / 8.074308 (3.995061) | 10.967322 / 10.191392 (0.775930) | 0.131938 / 0.680424 (-0.548486) | 0.015418 / 0.534201 (-0.518783) | 0.297257 / 0.579283 (-0.282026) | 0.270742 / 0.434364 (-0.163622) | 0.332296 / 0.540337 (-0.208042) | 0.421606 / 1.386936 (-0.965330) |\n\n</details>\n</details>\n\n\n"
] | 2024-01-03T11:33:10Z
| 2024-02-02T10:41:33Z
| 2024-02-02T10:35:28Z
|
MEMBER
| null | null | null |
Fix #6554.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6555/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6555/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6555.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6555",
"merged_at": "2024-02-02T10:35:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6555.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6555"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6343
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6343/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6343/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6343/events
|
https://github.com/huggingface/datasets/pull/6343
| 1,957,370,711
|
PR_kwDODunzps5dipeb
| 6,343
|
Remove unused argument in `_get_data_files_patterns`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006584 / 0.011353 (-0.004769) | 0.004197 / 0.011008 (-0.006812) | 0.083598 / 0.038508 (0.045090) | 0.075502 / 0.023109 (0.052392) | 0.312986 / 0.275898 (0.037088) | 0.344630 / 0.323480 (0.021150) | 0.005394 / 0.007986 (-0.002591) | 0.003485 / 0.004328 (-0.000843) | 0.064529 / 0.004250 (0.060279) | 0.055003 / 0.037052 (0.017950) | 0.320522 / 0.258489 (0.062033) | 0.362623 / 0.293841 (0.068782) | 0.030900 / 0.128546 (-0.097646) | 0.008459 / 0.075646 (-0.067187) | 0.286986 / 0.419271 (-0.132285) | 0.052310 / 0.043533 (0.008777) | 0.315873 / 0.255139 (0.060734) | 0.333962 / 0.283200 (0.050762) | 0.023836 / 0.141683 (-0.117847) | 1.481806 / 1.452155 (0.029651) | 1.567926 / 1.492716 (0.075209) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.268188 / 0.018006 (0.250182) | 0.520542 / 0.000490 (0.520052) | 0.017617 / 0.000200 (0.017417) | 0.000631 / 0.000054 (0.000577) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028828 / 0.037411 (-0.008584) | 0.083028 / 0.014526 (0.068502) | 0.099808 / 0.176557 (-0.076748) | 0.154282 / 0.737135 (-0.582853) | 0.098590 / 0.296338 (-0.197748) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407548 / 0.215209 (0.192339) | 4.066128 / 2.077655 (1.988474) | 2.036757 / 1.504120 (0.532637) | 1.870130 / 1.541195 (0.328935) | 1.949031 / 1.468490 (0.480541) | 0.489263 / 4.584777 (-4.095514) | 3.506269 / 3.745712 (-0.239443) | 3.457232 / 5.269862 (-1.812629) | 2.060097 / 4.565676 (-2.505580) | 0.057252 / 0.424275 (-0.367024) | 0.007727 / 0.007607 (0.000120) | 0.480229 / 0.226044 (0.254185) | 4.807064 / 2.268929 (2.538135) | 2.495438 / 55.444624 (-52.949186) | 2.186194 / 6.876477 (-4.690283) | 2.243372 / 2.142072 (0.101300) | 0.580550 / 4.805227 (-4.224678) | 0.135398 / 6.500664 (-6.365266) | 0.061878 / 0.075469 (-0.013591) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.305635 / 1.841788 (-0.536152) | 19.194421 / 8.074308 (11.120113) | 14.531699 / 10.191392 (4.340307) | 0.167144 / 0.680424 (-0.513280) | 0.018270 / 0.534201 (-0.515931) | 0.393702 / 0.579283 (-0.185581) | 0.406518 / 0.434364 (-0.027846) | 0.458126 / 0.540337 (-0.082211) | 0.639839 / 1.386936 (-0.747097) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006742 / 0.011353 (-0.004611) | 0.004092 / 0.011008 (-0.006916) | 0.065547 / 0.038508 (0.027039) | 0.076293 / 0.023109 (0.053184) | 0.389701 / 0.275898 (0.113803) | 0.429158 / 0.323480 (0.105678) | 0.005606 / 0.007986 (-0.002380) | 0.003491 / 0.004328 (-0.000837) | 0.065903 / 0.004250 (0.061653) | 0.057346 / 0.037052 (0.020293) | 0.393233 / 0.258489 (0.134744) | 0.433106 / 0.293841 (0.139265) | 0.032612 / 0.128546 (-0.095934) | 0.008777 / 0.075646 (-0.066869) | 0.073135 / 0.419271 (-0.346137) | 0.048167 / 0.043533 (0.004635) | 0.389309 / 0.255139 (0.134170) | 0.416442 / 0.283200 (0.133242) | 0.022839 / 0.141683 (-0.118844) | 1.531607 / 1.452155 (0.079453) | 1.598950 / 1.492716 (0.106234) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.254856 / 0.018006 (0.236850) | 0.528186 / 0.000490 (0.527697) | 0.006975 / 0.000200 (0.006775) | 0.000102 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032377 / 0.037411 (-0.005034) | 0.092706 / 0.014526 (0.078180) | 0.107618 / 0.176557 (-0.068939) | 0.160103 / 0.737135 (-0.577032) | 0.107226 / 0.296338 (-0.189112) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430922 / 0.215209 (0.215713) | 4.312556 / 2.077655 (2.234901) | 2.287686 / 1.504120 (0.783567) | 2.111103 / 1.541195 (0.569908) | 2.284105 / 1.468490 (0.815614) | 0.485987 / 4.584777 (-4.098790) | 3.557320 / 3.745712 (-0.188392) | 3.341150 / 5.269862 (-1.928711) | 2.056705 / 4.565676 (-2.508972) | 0.057265 / 0.424275 (-0.367010) | 0.007264 / 0.007607 (-0.000344) | 0.505191 / 0.226044 (0.279146) | 5.045379 / 2.268929 (2.776450) | 2.732357 / 55.444624 (-52.712267) | 2.390256 / 6.876477 (-4.486220) | 2.643676 / 2.142072 (0.501604) | 0.584630 / 4.805227 (-4.220597) | 0.132402 / 6.500664 (-6.368262) | 0.061387 / 0.075469 (-0.014082) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.340721 / 1.841788 (-0.501066) | 19.744145 / 8.074308 (11.669837) | 14.694482 / 10.191392 (4.503090) | 0.166294 / 0.680424 (-0.514129) | 0.020691 / 0.534201 (-0.513510) | 0.398359 / 0.579283 (-0.180924) | 0.423831 / 0.434364 (-0.010533) | 0.474365 / 0.540337 (-0.065972) | 0.649410 / 1.386936 (-0.737526) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004369 / 0.011353 (-0.006984) | 0.002728 / 0.011008 (-0.008280) | 0.063754 / 0.038508 (0.025246) | 0.029396 / 0.023109 (0.006287) | 0.269409 / 0.275898 (-0.006489) | 0.287654 / 0.323480 (-0.035826) | 0.003926 / 0.007986 (-0.004060) | 0.002366 / 0.004328 (-0.001963) | 0.048910 / 0.004250 (0.044660) | 0.043126 / 0.037052 (0.006074) | 0.260774 / 0.258489 (0.002285) | 0.299996 / 0.293841 (0.006155) | 0.023359 / 0.128546 (-0.105187) | 0.007259 / 0.075646 (-0.068388) | 0.211412 / 0.419271 (-0.207860) | 0.053883 / 0.043533 (0.010350) | 0.268946 / 0.255139 (0.013807) | 0.287664 / 0.283200 (0.004465) | 0.017600 / 0.141683 (-0.124083) | 1.096478 / 1.452155 (-0.355676) | 1.193063 / 1.492716 (-0.299653) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090985 / 0.018006 (0.072979) | 0.287168 / 0.000490 (0.286678) | 0.000208 / 0.000200 (0.000009) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019238 / 0.037411 (-0.018173) | 0.062660 / 0.014526 (0.048134) | 0.073414 / 0.176557 (-0.103143) | 0.120842 / 0.737135 (-0.616294) | 0.077658 / 0.296338 (-0.218681) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280285 / 0.215209 (0.065076) | 2.729807 / 2.077655 (0.652152) | 1.430686 / 1.504120 (-0.073434) | 1.307260 / 1.541195 (-0.233935) | 1.321013 / 1.468490 (-0.147477) | 0.387253 / 4.584777 (-4.197524) | 2.415635 / 3.745712 (-1.330077) | 2.557206 / 5.269862 (-2.712656) | 1.553224 / 4.565676 (-3.012453) | 0.045402 / 0.424275 (-0.378873) | 0.004798 / 0.007607 (-0.002809) | 0.330493 / 0.226044 (0.104449) | 3.226835 / 2.268929 (0.957906) | 1.739068 / 55.444624 (-53.705557) | 1.494841 / 6.876477 (-5.381636) | 1.528253 / 2.142072 (-0.613820) | 0.451525 / 4.805227 (-4.353702) | 0.096620 / 6.500664 (-6.404044) | 0.041176 / 0.075469 (-0.034293) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.930892 / 1.841788 (-0.910896) | 11.343351 / 8.074308 (3.269043) | 10.420327 / 10.191392 (0.228935) | 0.137629 / 0.680424 (-0.542795) | 0.013907 / 0.534201 (-0.520293) | 0.267778 / 0.579283 (-0.311505) | 0.260774 / 0.434364 (-0.173590) | 0.308213 / 0.540337 (-0.232124) | 0.419659 / 1.386936 (-0.967277) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004867 / 0.011353 (-0.006486) | 0.002830 / 0.011008 (-0.008178) | 0.048506 / 0.038508 (0.009998) | 0.048190 / 0.023109 (0.025080) | 0.279995 / 0.275898 (0.004097) | 0.296396 / 0.323480 (-0.027083) | 0.004700 / 0.007986 (-0.003285) | 0.003546 / 0.004328 (-0.000782) | 0.048237 / 0.004250 (0.043987) | 0.037102 / 0.037052 (0.000050) | 0.284582 / 0.258489 (0.026093) | 0.315896 / 0.293841 (0.022055) | 0.024699 / 0.128546 (-0.103848) | 0.007077 / 0.075646 (-0.068569) | 0.054471 / 0.419271 (-0.364800) | 0.032537 / 0.043533 (-0.010996) | 0.276761 / 0.255139 (0.021622) | 0.294741 / 0.283200 (0.011542) | 0.017766 / 0.141683 (-0.123917) | 1.118377 / 1.452155 (-0.333778) | 1.186617 / 1.492716 (-0.306100) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088981 / 0.018006 (0.070975) | 0.297793 / 0.000490 (0.297303) | 0.000220 / 0.000200 (0.000020) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021300 / 0.037411 (-0.016111) | 0.070059 / 0.014526 (0.055533) | 0.080452 / 0.176557 (-0.096104) | 0.118461 / 0.737135 (-0.618674) | 0.081099 / 0.296338 (-0.215240) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300560 / 0.215209 (0.085351) | 2.951461 / 2.077655 (0.873806) | 1.621978 / 1.504120 (0.117858) | 1.478871 / 1.541195 (-0.062324) | 1.520732 / 1.468490 (0.052242) | 0.408625 / 4.584777 (-4.176152) | 2.407253 / 3.745712 (-1.338459) | 2.546000 / 5.269862 (-2.723861) | 1.525920 / 4.565676 (-3.039757) | 0.046817 / 0.424275 (-0.377458) | 0.004880 / 0.007607 (-0.002727) | 0.350866 / 0.226044 (0.124821) | 3.489379 / 2.268929 (1.220451) | 1.967197 / 55.444624 (-53.477427) | 1.686083 / 6.876477 (-5.190394) | 1.699307 / 2.142072 (-0.442766) | 0.479659 / 4.805227 (-4.325568) | 0.098853 / 6.500664 (-6.401811) | 0.040718 / 0.075469 (-0.034751) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.018352 / 1.841788 (-0.823436) | 12.022551 / 8.074308 (3.948243) | 10.841890 / 10.191392 (0.650498) | 0.130732 / 0.680424 (-0.549692) | 0.016334 / 0.534201 (-0.517867) | 0.271984 / 0.579283 (-0.307299) | 0.276733 / 0.434364 (-0.157631) | 0.308049 / 0.540337 (-0.232289) | 0.415428 / 1.386936 (-0.971508) |\n\n</details>\n</details>\n\n\n"
] | 2023-10-23T14:54:18Z
| 2023-11-16T09:09:42Z
| 2023-11-16T09:03:39Z
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6343/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6343/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6343.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6343",
"merged_at": "2023-11-16T09:03:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6343.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6343"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4648
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4648/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4648/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4648/events
|
https://github.com/huggingface/datasets/issues/4648
| 1,296,659,335
|
I_kwDODunzps5NSXOH
| 4,648
|
Add WikiAnswers dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/omarespejel",
"id": 4755430,
"login": "omarespejel",
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/omarespejel",
"user_view_type": "public"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
closed
| false
| null |
[] | null |
[
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/WikiAnswers)"
] | 2022-07-07T01:06:37Z
| 2022-07-14T02:03:40Z
| 2022-07-14T02:03:40Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Adding a Dataset
- **Name:** *WikiAnswers*
- **Description:** *The WikiAnswers corpus contains clusters of questions tagged by WikiAnswers users as paraphrases. Each cluster optionally contains an answer provided by WikiAnswers users.*
- **Paper:** *https://dl.acm.org/doi/10.1145/2623330.2623677*
- **Data:** *https://github.com/afader/oqa#wikianswers-corpus*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/omarespejel",
"id": 4755430,
"login": "omarespejel",
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/omarespejel",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4648/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4648/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5652
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5652/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5652/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5652/events
|
https://github.com/huggingface/datasets/pull/5652
| 1,632,546,073
|
PR_kwDODunzps5MeVUR
| 5,652
|
Copy features
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007455 / 0.011353 (-0.003898) | 0.005278 / 0.011008 (-0.005731) | 0.098981 / 0.038508 (0.060473) | 0.029208 / 0.023109 (0.006099) | 0.304132 / 0.275898 (0.028234) | 0.340010 / 0.323480 (0.016530) | 0.005514 / 0.007986 (-0.002472) | 0.003636 / 0.004328 (-0.000692) | 0.076737 / 0.004250 (0.072486) | 0.041985 / 0.037052 (0.004933) | 0.314941 / 0.258489 (0.056452) | 0.346686 / 0.293841 (0.052845) | 0.032528 / 0.128546 (-0.096018) | 0.011795 / 0.075646 (-0.063851) | 0.322122 / 0.419271 (-0.097150) | 0.051548 / 0.043533 (0.008015) | 0.310561 / 0.255139 (0.055422) | 0.329443 / 0.283200 (0.046243) | 0.092820 / 0.141683 (-0.048863) | 1.495764 / 1.452155 (0.043609) | 1.586734 / 1.492716 (0.094018) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195830 / 0.018006 (0.177824) | 0.422075 / 0.000490 (0.421586) | 0.005483 / 0.000200 (0.005283) | 0.000133 / 0.000054 (0.000078) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023468 / 0.037411 (-0.013943) | 0.097713 / 0.014526 (0.083187) | 0.105331 / 0.176557 (-0.071225) | 0.166237 / 0.737135 (-0.570898) | 0.108924 / 0.296338 (-0.187415) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.671901 / 0.215209 (0.456692) | 6.745691 / 2.077655 (4.668036) | 2.132508 / 1.504120 (0.628388) | 1.693808 / 1.541195 (0.152614) | 1.715282 / 1.468490 (0.246792) | 0.955354 / 4.584777 (-3.629422) | 3.810296 / 3.745712 (0.064584) | 2.214891 / 5.269862 (-3.054970) | 1.461513 / 4.565676 (-3.104164) | 0.109846 / 0.424275 (-0.314430) | 0.013546 / 0.007607 (0.005939) | 0.780046 / 0.226044 (0.554001) | 7.789020 / 2.268929 (5.520091) | 2.602411 / 55.444624 (-52.842213) | 1.995096 / 6.876477 (-4.881380) | 2.009022 / 2.142072 (-0.133051) | 1.069215 / 4.805227 (-3.736012) | 0.179812 / 6.500664 (-6.320852) | 0.068125 / 0.075469 (-0.007344) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.201866 / 1.841788 (-0.639921) | 13.878814 / 8.074308 (5.804506) | 14.179264 / 10.191392 (3.987872) | 0.128908 / 0.680424 (-0.551515) | 0.017257 / 0.534201 (-0.516944) | 0.379500 / 0.579283 (-0.199783) | 0.393308 / 0.434364 (-0.041056) | 0.444700 / 0.540337 (-0.095638) | 0.531043 / 1.386936 (-0.855893) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007413 / 0.011353 (-0.003940) | 0.005431 / 0.011008 (-0.005577) | 0.078158 / 0.038508 (0.039650) | 0.028837 / 0.023109 (0.005728) | 0.343635 / 0.275898 (0.067737) | 0.383041 / 0.323480 (0.059561) | 0.005283 / 0.007986 (-0.002703) | 0.003673 / 0.004328 (-0.000655) | 0.076461 / 0.004250 (0.072211) | 0.038625 / 0.037052 (0.001573) | 0.341109 / 0.258489 (0.082620) | 0.387027 / 0.293841 (0.093186) | 0.032512 / 0.128546 (-0.096034) | 0.011903 / 0.075646 (-0.063744) | 0.086340 / 0.419271 (-0.332931) | 0.043211 / 0.043533 (-0.000321) | 0.339994 / 0.255139 (0.084855) | 0.370868 / 0.283200 (0.087668) | 0.091679 / 0.141683 (-0.050004) | 1.547188 / 1.452155 (0.095033) | 1.578545 / 1.492716 (0.085829) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216981 / 0.018006 (0.198975) | 0.412206 / 0.000490 (0.411716) | 0.004243 / 0.000200 (0.004043) | 0.000130 / 0.000054 (0.000075) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025392 / 0.037411 (-0.012020) | 0.102577 / 0.014526 (0.088051) | 0.107672 / 0.176557 (-0.068884) | 0.160657 / 0.737135 (-0.576478) | 0.111646 / 0.296338 (-0.184692) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.698815 / 0.215209 (0.483606) | 6.958931 / 2.077655 (4.881276) | 2.344216 / 1.504120 (0.840096) | 1.907752 / 1.541195 (0.366557) | 1.964251 / 1.468490 (0.495761) | 0.950754 / 4.584777 (-3.634023) | 3.829700 / 3.745712 (0.083988) | 3.055565 / 5.269862 (-2.214297) | 1.575851 / 4.565676 (-2.989825) | 0.109227 / 0.424275 (-0.315048) | 0.013163 / 0.007607 (0.005556) | 0.804613 / 0.226044 (0.578569) | 8.015035 / 2.268929 (5.746107) | 2.796358 / 55.444624 (-52.648266) | 2.212561 / 6.876477 (-4.663916) | 2.229918 / 2.142072 (0.087845) | 1.062041 / 4.805227 (-3.743186) | 0.181384 / 6.500664 (-6.319280) | 0.068564 / 0.075469 (-0.006905) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.287904 / 1.841788 (-0.553884) | 14.539222 / 8.074308 (6.464914) | 14.232097 / 10.191392 (4.040705) | 0.130870 / 0.680424 (-0.549554) | 0.016710 / 0.534201 (-0.517491) | 0.384454 / 0.579283 (-0.194829) | 0.391750 / 0.434364 (-0.042614) | 0.443995 / 0.540337 (-0.096343) | 0.526255 / 1.386936 (-0.860681) |\n\n</details>\n</details>\n\n\n",
"Arf I need to fix some tests first - sorry",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008393 / 0.011353 (-0.002959) | 0.005635 / 0.011008 (-0.005373) | 0.114840 / 0.038508 (0.076332) | 0.039272 / 0.023109 (0.016163) | 0.352116 / 0.275898 (0.076218) | 0.386614 / 0.323480 (0.063134) | 0.006348 / 0.007986 (-0.001638) | 0.005872 / 0.004328 (0.001544) | 0.086437 / 0.004250 (0.082187) | 0.054003 / 0.037052 (0.016951) | 0.350302 / 0.258489 (0.091813) | 0.400148 / 0.293841 (0.106308) | 0.042436 / 0.128546 (-0.086111) | 0.013987 / 0.075646 (-0.061660) | 0.399434 / 0.419271 (-0.019837) | 0.059223 / 0.043533 (0.015690) | 0.354511 / 0.255139 (0.099372) | 0.377764 / 0.283200 (0.094564) | 0.112297 / 0.141683 (-0.029386) | 1.677483 / 1.452155 (0.225328) | 1.784942 / 1.492716 (0.292226) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233334 / 0.018006 (0.215328) | 0.450575 / 0.000490 (0.450085) | 0.000376 / 0.000200 (0.000176) | 0.000068 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031995 / 0.037411 (-0.005416) | 0.126798 / 0.014526 (0.112272) | 0.138453 / 0.176557 (-0.038104) | 0.207360 / 0.737135 (-0.529775) | 0.147744 / 0.296338 (-0.148594) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.481496 / 0.215209 (0.266287) | 4.810495 / 2.077655 (2.732840) | 2.457917 / 1.504120 (0.953797) | 2.300073 / 1.541195 (0.758879) | 2.065595 / 1.468490 (0.597105) | 0.814589 / 4.584777 (-3.770188) | 4.566496 / 3.745712 (0.820784) | 2.386947 / 5.269862 (-2.882914) | 1.531639 / 4.565676 (-3.034037) | 0.099569 / 0.424275 (-0.324706) | 0.014971 / 0.007607 (0.007364) | 0.590359 / 0.226044 (0.364314) | 5.885250 / 2.268929 (3.616322) | 2.706799 / 55.444624 (-52.737826) | 2.324485 / 6.876477 (-4.551992) | 2.452751 / 2.142072 (0.310678) | 0.966955 / 4.805227 (-3.838272) | 0.198165 / 6.500664 (-6.302499) | 0.076877 / 0.075469 (0.001408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.499085 / 1.841788 (-0.342702) | 17.705516 / 8.074308 (9.631208) | 16.481174 / 10.191392 (6.289782) | 0.191832 / 0.680424 (-0.488592) | 0.021417 / 0.534201 (-0.512784) | 0.519647 / 0.579283 (-0.059636) | 0.498432 / 0.434364 (0.064068) | 0.598206 / 0.540337 (0.057868) | 0.700990 / 1.386936 (-0.685946) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008746 / 0.011353 (-0.002607) | 0.006052 / 0.011008 (-0.004956) | 0.092938 / 0.038508 (0.054430) | 0.038932 / 0.023109 (0.015823) | 0.406919 / 0.275898 (0.131021) | 0.444325 / 0.323480 (0.120845) | 0.006735 / 0.007986 (-0.001251) | 0.005972 / 0.004328 (0.001643) | 0.088152 / 0.004250 (0.083902) | 0.051009 / 0.037052 (0.013957) | 0.407415 / 0.258489 (0.148926) | 0.481048 / 0.293841 (0.187207) | 0.043268 / 0.128546 (-0.085278) | 0.014574 / 0.075646 (-0.061072) | 0.103555 / 0.419271 (-0.315716) | 0.058251 / 0.043533 (0.014719) | 0.406294 / 0.255139 (0.151155) | 0.429229 / 0.283200 (0.146029) | 0.116977 / 0.141683 (-0.024705) | 1.765885 / 1.452155 (0.313730) | 1.885557 / 1.492716 (0.392841) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284014 / 0.018006 (0.266008) | 0.458066 / 0.000490 (0.457576) | 0.022286 / 0.000200 (0.022086) | 0.000158 / 0.000054 (0.000104) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033971 / 0.037411 (-0.003440) | 0.132030 / 0.014526 (0.117504) | 0.141725 / 0.176557 (-0.034831) | 0.199818 / 0.737135 (-0.537318) | 0.149176 / 0.296338 (-0.147162) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.511463 / 0.215209 (0.296254) | 4.917921 / 2.077655 (2.840267) | 2.382377 / 1.504120 (0.878257) | 2.154599 / 1.541195 (0.613404) | 2.247858 / 1.468490 (0.779368) | 0.834524 / 4.584777 (-3.750253) | 4.560010 / 3.745712 (0.814297) | 2.403055 / 5.269862 (-2.866806) | 1.780784 / 4.565676 (-2.784893) | 0.101409 / 0.424275 (-0.322866) | 0.014657 / 0.007607 (0.007050) | 0.610137 / 0.226044 (0.384093) | 6.051011 / 2.268929 (3.782083) | 2.887357 / 55.444624 (-52.557267) | 2.518225 / 6.876477 (-4.358252) | 2.559654 / 2.142072 (0.417582) | 0.981226 / 4.805227 (-3.824001) | 0.197323 / 6.500664 (-6.303341) | 0.076851 / 0.075469 (0.001382) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.554662 / 1.841788 (-0.287126) | 18.038993 / 8.074308 (9.964685) | 16.719948 / 10.191392 (6.528556) | 0.195641 / 0.680424 (-0.484783) | 0.020699 / 0.534201 (-0.513502) | 0.498949 / 0.579283 (-0.080334) | 0.487775 / 0.434364 (0.053411) | 0.591413 / 0.540337 (0.051075) | 0.708520 / 1.386936 (-0.678416) |\n\n</details>\n</details>\n\n\n",
"Ready for review @mariosasko :)",
"Yea it does behave as expected, but modifying a dataset's features dict is not recommended and can lead to unpredictable behaviors. By copying the features, we make sure users don't modify the dataset's features dict.\r\n\r\nSince the attribute is public, users expect to be able to do whatever they want with it, without checking if they have to copy it or not",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008069 / 0.011353 (-0.003284) | 0.005051 / 0.011008 (-0.005958) | 0.096587 / 0.038508 (0.058079) | 0.032954 / 0.023109 (0.009844) | 0.317877 / 0.275898 (0.041979) | 0.328677 / 0.323480 (0.005197) | 0.005524 / 0.007986 (-0.002462) | 0.003958 / 0.004328 (-0.000370) | 0.072692 / 0.004250 (0.068441) | 0.044554 / 0.037052 (0.007502) | 0.311121 / 0.258489 (0.052632) | 0.355912 / 0.293841 (0.062071) | 0.035934 / 0.128546 (-0.092612) | 0.012056 / 0.075646 (-0.063590) | 0.332575 / 0.419271 (-0.086696) | 0.049788 / 0.043533 (0.006255) | 0.307918 / 0.255139 (0.052779) | 0.326757 / 0.283200 (0.043557) | 0.098671 / 0.141683 (-0.043012) | 1.424625 / 1.452155 (-0.027530) | 1.507944 / 1.492716 (0.015228) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207976 / 0.018006 (0.189970) | 0.439604 / 0.000490 (0.439114) | 0.000435 / 0.000200 (0.000235) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026961 / 0.037411 (-0.010451) | 0.106627 / 0.014526 (0.092101) | 0.115292 / 0.176557 (-0.061264) | 0.171901 / 0.737135 (-0.565234) | 0.123276 / 0.296338 (-0.173062) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407679 / 0.215209 (0.192469) | 4.071958 / 2.077655 (1.994303) | 1.854270 / 1.504120 (0.350151) | 1.678406 / 1.541195 (0.137211) | 1.715890 / 1.468490 (0.247400) | 0.705536 / 4.584777 (-3.879241) | 3.774198 / 3.745712 (0.028486) | 2.096429 / 5.269862 (-3.173432) | 1.431810 / 4.565676 (-3.133866) | 0.085557 / 0.424275 (-0.338718) | 0.012191 / 0.007607 (0.004584) | 0.502937 / 0.226044 (0.276893) | 5.034391 / 2.268929 (2.765463) | 2.393826 / 55.444624 (-53.050799) | 2.037383 / 6.876477 (-4.839094) | 2.192037 / 2.142072 (0.049964) | 0.829298 / 4.805227 (-3.975929) | 0.167781 / 6.500664 (-6.332883) | 0.063405 / 0.075469 (-0.012064) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.179189 / 1.841788 (-0.662599) | 14.464132 / 8.074308 (6.389824) | 14.869024 / 10.191392 (4.677632) | 0.172864 / 0.680424 (-0.507560) | 0.017817 / 0.534201 (-0.516384) | 0.427849 / 0.579283 (-0.151434) | 0.434447 / 0.434364 (0.000083) | 0.502077 / 0.540337 (-0.038260) | 0.599587 / 1.386936 (-0.787349) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007366 / 0.011353 (-0.003987) | 0.004939 / 0.011008 (-0.006069) | 0.074982 / 0.038508 (0.036474) | 0.032611 / 0.023109 (0.009501) | 0.340670 / 0.275898 (0.064772) | 0.372471 / 0.323480 (0.048991) | 0.005567 / 0.007986 (-0.002418) | 0.003956 / 0.004328 (-0.000372) | 0.074550 / 0.004250 (0.070300) | 0.047097 / 0.037052 (0.010045) | 0.337049 / 0.258489 (0.078560) | 0.391512 / 0.293841 (0.097671) | 0.035712 / 0.128546 (-0.092835) | 0.012040 / 0.075646 (-0.063606) | 0.087126 / 0.419271 (-0.332146) | 0.048290 / 0.043533 (0.004757) | 0.335069 / 0.255139 (0.079930) | 0.362080 / 0.283200 (0.078881) | 0.098606 / 0.141683 (-0.043077) | 1.456802 / 1.452155 (0.004647) | 1.554652 / 1.492716 (0.061936) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200015 / 0.018006 (0.182009) | 0.442772 / 0.000490 (0.442283) | 0.004594 / 0.000200 (0.004394) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028510 / 0.037411 (-0.008901) | 0.109654 / 0.014526 (0.095128) | 0.119921 / 0.176557 (-0.056636) | 0.170289 / 0.737135 (-0.566846) | 0.125288 / 0.296338 (-0.171051) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430919 / 0.215209 (0.215710) | 4.274132 / 2.077655 (2.196478) | 2.014385 / 1.504120 (0.510265) | 1.822094 / 1.541195 (0.280899) | 1.938188 / 1.468490 (0.469698) | 0.707812 / 4.584777 (-3.876965) | 3.925730 / 3.745712 (0.180018) | 2.117481 / 5.269862 (-3.152381) | 1.369521 / 4.565676 (-3.196155) | 0.088414 / 0.424275 (-0.335861) | 0.013101 / 0.007607 (0.005494) | 0.538468 / 0.226044 (0.312424) | 5.384614 / 2.268929 (3.115685) | 2.487709 / 55.444624 (-52.956915) | 2.152060 / 6.876477 (-4.724417) | 2.225777 / 2.142072 (0.083705) | 0.856749 / 4.805227 (-3.948479) | 0.173299 / 6.500664 (-6.327366) | 0.068872 / 0.075469 (-0.006597) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.268009 / 1.841788 (-0.573778) | 15.102648 / 8.074308 (7.028340) | 14.216810 / 10.191392 (4.025418) | 0.163661 / 0.680424 (-0.516763) | 0.017394 / 0.534201 (-0.516807) | 0.418030 / 0.579283 (-0.161253) | 0.413717 / 0.434364 (-0.020647) | 0.487526 / 0.540337 (-0.052811) | 0.581499 / 1.386936 (-0.805437) |\n\n</details>\n</details>\n\n\n"
] | 2023-03-20T17:17:23Z
| 2023-03-23T13:19:19Z
| 2023-03-23T13:12:08Z
|
MEMBER
| null | null | null |
Some users (even internally at HF) are doing
```python
dset_features = dset.features
dset_features.pop(col_to_remove)
dset = dset.map(..., features=dset_features)
```
Right now this causes issues because it modifies the features dict in place before the map.
In this PR I modified `dset.features` to return a copy of the features, so that users can modify it if they want.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5652/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5652/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5652.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5652",
"merged_at": "2023-03-23T13:12:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5652.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5652"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7303
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7303/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7303/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7303/events
|
https://github.com/huggingface/datasets/issues/7303
| 2,705,729,696
|
I_kwDODunzps6hRiig
| 7,303
|
DataFilesNotFoundError for datasets LM1B
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/72264324?v=4",
"events_url": "https://api.github.com/users/hml1996-fight/events{/privacy}",
"followers_url": "https://api.github.com/users/hml1996-fight/followers",
"following_url": "https://api.github.com/users/hml1996-fight/following{/other_user}",
"gists_url": "https://api.github.com/users/hml1996-fight/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hml1996-fight",
"id": 72264324,
"login": "hml1996-fight",
"node_id": "MDQ6VXNlcjcyMjY0MzI0",
"organizations_url": "https://api.github.com/users/hml1996-fight/orgs",
"received_events_url": "https://api.github.com/users/hml1996-fight/received_events",
"repos_url": "https://api.github.com/users/hml1996-fight/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hml1996-fight/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hml1996-fight/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hml1996-fight",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! Can you try with a more recent version of `datasets` ? Also you might need to pass trust_remote_code=True since it's a script based dataset"
] | 2024-11-29T17:27:45Z
| 2024-12-11T13:22:47Z
| 2024-12-11T13:22:47Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Cannot load the dataset https://huggingface.co/datasets/billion-word-benchmark/lm1b
### Steps to reproduce the bug
`dataset = datasets.load_dataset('lm1b', split=split)`
### Expected behavior
`Traceback (most recent call last):
File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/word_freq.py", line 13, in <module>
train_data = DiffusionLoader(tokenizer=tokenizer).my_load(task_name='lm1b', splits=['train'])[0]
File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/dataloader.py", line 20, in my_load
return [self._load(task_name, name) for name in splits]
File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/dataloader.py", line 20, in <listcomp>
return [self._load(task_name, name) for name in splits]
File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/dataloader.py", line 13, in _load
dataset = datasets.load_dataset('lm1b', split=split)
File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 2594, in load_dataset
builder_instance = load_dataset_builder(
File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 2266, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 1827, in dataset_module_factory
).get_module()
File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 1040, in get_module
module_name, default_builder_kwargs = infer_module_for_data_files(
File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 598, in infer_module_for_data_files
raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else ""))
datasets.exceptions.DataFilesNotFoundError: No (supported) data files found in lm1b`
### Environment info
datasets: 2.20.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/72264324?v=4",
"events_url": "https://api.github.com/users/hml1996-fight/events{/privacy}",
"followers_url": "https://api.github.com/users/hml1996-fight/followers",
"following_url": "https://api.github.com/users/hml1996-fight/following{/other_user}",
"gists_url": "https://api.github.com/users/hml1996-fight/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hml1996-fight",
"id": 72264324,
"login": "hml1996-fight",
"node_id": "MDQ6VXNlcjcyMjY0MzI0",
"organizations_url": "https://api.github.com/users/hml1996-fight/orgs",
"received_events_url": "https://api.github.com/users/hml1996-fight/received_events",
"repos_url": "https://api.github.com/users/hml1996-fight/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hml1996-fight/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hml1996-fight/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hml1996-fight",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7303/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7303/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5426
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5426/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5426/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5426/events
|
https://github.com/huggingface/datasets/issues/5426
| 1,535,158,555
|
I_kwDODunzps5bgKkb
| 5,426
|
CI tests are broken: SchemaInferenceError
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[] | 2023-01-16T16:02:07Z
| 2023-06-02T06:40:32Z
| 2023-01-16T16:49:04Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
CI test (unit, ubuntu-latest, deps-minimum) is broken, raising a `SchemaInferenceError`: see https://github.com/huggingface/datasets/actions/runs/3930901593/jobs/6721492004
```
FAILED tests/test_beam.py::BeamBuilderTest::test_download_and_prepare_sharded - datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data
```
Stack trace:
```
______________ BeamBuilderTest.test_download_and_prepare_sharded _______________
[gw1] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python
self = <tests.test_beam.BeamBuilderTest testMethod=test_download_and_prepare_sharded>
@require_beam
def test_download_and_prepare_sharded(self):
import apache_beam as beam
original_write_parquet = beam.io.parquetio.WriteToParquet
expected_num_examples = len(get_test_dummy_examples())
with tempfile.TemporaryDirectory() as tmp_cache_dir:
builder = DummyBeamDataset(cache_dir=tmp_cache_dir, beam_runner="DirectRunner")
with patch("apache_beam.io.parquetio.WriteToParquet") as write_parquet_mock:
write_parquet_mock.side_effect = partial(original_write_parquet, num_shards=2)
> builder.download_and_prepare()
tests/test_beam.py:97:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/builder.py:864: in download_and_prepare
**download_and_prepare_kwargs,
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/builder.py:1976: in _download_and_prepare
num_examples, num_bytes = beam_writer.finalize(metrics.query(m_filter))
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/arrow_writer.py:694: in finalize
shard_num_bytes, _ = parquet_to_arrow(source, destination)
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/arrow_writer.py:740: in parquet_to_arrow
num_bytes, num_examples = writer.finalize()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <datasets.arrow_writer.ArrowWriter object at 0x7f6dcbb3e810>
close_stream = True
def finalize(self, close_stream=True):
self.write_rows_on_file()
# In case current_examples < writer_batch_size, but user uses finalize()
if self._check_duplicates:
self.check_duplicate_keys()
# Re-intializing to empty list for next batch
self.hkey_record = []
self.write_examples_on_file()
# If schema is known, infer features even if no examples were written
if self.pa_writer is None and self.schema:
self._build_writer(self.schema)
if self.pa_writer is not None:
self.pa_writer.close()
self.pa_writer = None
if close_stream:
self.stream.close()
else:
if close_stream:
self.stream.close()
> raise SchemaInferenceError("Please pass `features` or at least one example when writing data")
E datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/arrow_writer.py:593: SchemaInferenceError
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5426/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5426/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7482
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7482/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7482/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7482/events
|
https://github.com/huggingface/datasets/pull/7482
| 2,950,890,368
|
PR_kwDODunzps6QRyY6
| 7,482
|
Implement capability to restore non-nullability in Features
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Interestingly, this does not close #7479. The Features are not correctly maintained when calling `from_dict` with the custom Features.",
"Unfortunately this PR does not fix the reported issue. After more digging:\r\n\r\n- when the dataset is created, nullability information is lost in Features;\r\n- even with this PR, it will get lost eventually because of internal copying/recreation of the Features object without accounting for the nullable fields;\r\n- even if that is also fixed, and Features.arrow_schema correctly holds the nullability info, [casting the arrow Table](https://github.com/huggingface/datasets/blob/5f8d2ad9a1b0bccfd962d998987228addfd5be9f/src/datasets/arrow_dataset.py#L677) with a less strict schema to a more strict one (with nullability) will fail (only on deeper structs, not on flat fields). \r\n\r\nInterestingly, passing custom Features does not immediately load the underlying data with the right arrow_schema. Instead, the workflow is like this:\r\n\r\n- load pyarrow table with any of the methods (from_dict, from_pandas, etc.), which will always AUTO INFER rather than use a provided schema\r\n- the loaded table with auto-schema will be used to initialize the `Dataset` class, and only during construction will [CAST](https://github.com/huggingface/datasets/blob/5f8d2ad9a1b0bccfd962d998987228addfd5be9f/src/datasets/arrow_dataset.py#L677) the table to the user-provided schema if needed, if it differs from the auto-inferred one.\r\n\r\nSo I figured, since many/all of the pyarrow [`Table.from_*`](https://arrow.apache.org/docs/python/generated/pyarrow.Table.html) methods have a `schema=` argument, we should already load the Table with the correct schema to begin with. As an example, I tried changing this line:\r\n\r\nhttps://github.com/huggingface/datasets/blob/5f8d2ad9a1b0bccfd962d998987228addfd5be9f/src/datasets/arrow_dataset.py#L940\r\n\r\nto include the arrow_schema, if provided:\r\n\r\n```python\r\npa_table = InMemoryTable.from_pydict(mapping=mapping, schema=features.arrow_schema if features is not None else None)\r\n```\r\n\r\nBut that leads to:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/ampere/vanroy/datasets/scratch.py\", line 33, in <module>\r\n ds = Dataset.from_dict(\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"/home/local/vanroy/datasets/src/datasets/arrow_dataset.py\", line 957, in from_dict\r\n pa_table = InMemoryTable.from_pydict(mapping=mapping, schema=features.arrow_schema if features is not None else None)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/local/vanroy/datasets/src/datasets/table.py\", line 758, in from_pydict\r\n return cls(pa.Table.from_pydict(*args, **kwargs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"pyarrow/table.pxi\", line 1968, in pyarrow.lib._Tabular.from_pydict\r\n File \"pyarrow/table.pxi\", line 6354, in pyarrow.lib._from_pydict\r\n File \"pyarrow/array.pxi\", line 402, in pyarrow.lib.asarray\r\n File \"pyarrow/array.pxi\", line 252, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 114, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"/home/local/vanroy/datasets/src/datasets/arrow_writer.py\", line 201, in __arrow_array__\r\n raise ValueError(\"TypedSequence is supposed to be used with pa.array(typed_sequence, type=None)\")\r\nValueError: TypedSequence is supposed to be used with pa.array(typed_sequence, type=None)\r\n```\r\n\r\nand I am not too familiar with pyarrow to solve this.\r\n\r\nSo ultimately I'm a bit at a loss here. I *think*, if we'd want to do this right, the automatic casting in init should be removed in favor of handling the logic inside `Dataset.from_*`, by passing the schema explicitly to `pa.Table.from_*(..., schema=schema)`. But I lack the knowledge of pyarrow to go further than what I've written about above.\r\n",
"It's indeed a bit more work to support nullable since in addition to your comments, there are unclear behavior when it comes to concatenating nullable with non-nullable, and maybe how to handle non-nullable lists and nested data.\r\n\r\nBut yup I agree having the `Dataset.from_*` function pass the `schema` to the `pa.Table.from*` would be the way.\r\n\r\nJust one comment about this error: \r\n\r\n```\r\nValueError: TypedSequence is supposed to be used with pa.array(typed_sequence, type=None)\r\n```\r\n\r\nThis happens because `Dataset.from_dict` uses `OptimizedTypedSequence` by default, which should only be used if the user doesn't specify a schema"
] | 2025-03-26T22:16:09Z
| 2025-03-27T13:07:50Z
| null |
CONTRIBUTOR
| null | null | null |
This PR attempts to keep track of non_nullable pyarrow fields when converting a `pa.Schema` to `Features`. At the same time, when outputting the `arrow_schema`, the original non-nullable fields are restored. This allows for more consistent behavior and avoids breaking behavior as illustrated in #7479.
I am by no means a pyarrow expert so some logic in `find_non_nullable_fields` may not perfect. Not sure if more logic (type checks) are needed for deep-checking a given schema. Maybe there are other pyarrow structures that need to be covered?
Tests are added, but again, these may not have sufficient coverage in terms of pyarrow structure types.
closes #7479
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7482/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7482/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7482.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7482",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7482.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7482"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7442
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7442/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7442/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7442/events
|
https://github.com/huggingface/datasets/issues/7442
| 2,905,543,017
|
I_kwDODunzps6tLxFp
| 7,442
|
Flexible Loader
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13894030?v=4",
"events_url": "https://api.github.com/users/dipta007/events{/privacy}",
"followers_url": "https://api.github.com/users/dipta007/followers",
"following_url": "https://api.github.com/users/dipta007/following{/other_user}",
"gists_url": "https://api.github.com/users/dipta007/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dipta007",
"id": 13894030,
"login": "dipta007",
"node_id": "MDQ6VXNlcjEzODk0MDMw",
"organizations_url": "https://api.github.com/users/dipta007/orgs",
"received_events_url": "https://api.github.com/users/dipta007/received_events",
"repos_url": "https://api.github.com/users/dipta007/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dipta007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dipta007/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dipta007",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Ideally `save_to_disk` should save in a format compatible with load_dataset, wdyt ?",
"> Ideally `save_to_disk` should save in a format compatible with load_dataset, wdyt ?\n\nThat would be perfect if not at least a flexible loader.",
"@lhoestq For now, you can use this small utility library: [nanoml](https://pypi.org/project/nanoml/)\n```python\nfrom nanoml.data import load_dataset_flexible\n```\n\nI actively develop and maintain this utility library. Open to contributors. Please open issues, PR, or feature requests."
] | 2025-03-09T16:55:03Z
| 2025-03-27T23:58:17Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Can we have a utility function that will use `load_from_disk` when given the local path and `load_dataset` if given an HF dataset?
It can be something as simple as this one:
```
def load_hf_dataset(path_or_name):
if os.path.exists(path_or_name):
return load_from_disk(path_or_name)
else:
return load_dataset(path_or_name)
```
### Motivation
This can be done inside the user codebase, too, but in my experience, it becomes repetitive code.
### Your contribution
I can open a pull request.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7442/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7442/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5632
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5632/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5632/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5632/events
|
https://github.com/huggingface/datasets/issues/5632
| 1,621,177,391
|
I_kwDODunzps5goTQv
| 5,632
|
Dataset cannot convert too large dictionnary
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/108518627?v=4",
"events_url": "https://api.github.com/users/MaraLac/events{/privacy}",
"followers_url": "https://api.github.com/users/MaraLac/followers",
"following_url": "https://api.github.com/users/MaraLac/following{/other_user}",
"gists_url": "https://api.github.com/users/MaraLac/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MaraLac",
"id": 108518627,
"login": "MaraLac",
"node_id": "U_kgDOBnfc4w",
"organizations_url": "https://api.github.com/users/MaraLac/orgs",
"received_events_url": "https://api.github.com/users/MaraLac/received_events",
"repos_url": "https://api.github.com/users/MaraLac/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MaraLac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MaraLac/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MaraLac",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Answered on the forum:\r\n\r\n> To fix the overflow error, we need to merge [support LargeListArray in pyarrow by xwwwwww · Pull Request #4800 · huggingface/datasets · GitHub](https://github.com/huggingface/datasets/pull/4800), which adds support for the large lists. However, before merging it, we need to come up with a cleaner API for large lists. I hope to find some time to address this before Datasets 3.0."
] | 2023-03-13T10:14:40Z
| 2023-03-16T15:28:57Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Hello everyone!
I tried to build a new dataset with the command "dict_valid = datasets.Dataset.from_dict({'input_values': values_array})".
However, I have a very large dataset (~400Go) and it seems that dataset cannot handle this.
Indeed, I can create the dataset until a certain size of my dictionnary, and then I have the error "OverflowError: Python int too large to convert to C long".
Do you know how to solve this problem?
Unfortunately I cannot give a reproductible code because I cannot share a so large file, but you can find the code below (it's a test on only a part of the validation data ~10Go, but it's already the case).
Thank you!
### Steps to reproduce the bug
SAVE_DIR = './data/'
features = h5py.File(SAVE_DIR+'features.hdf5','r')
valid_data = features["validation"]["data/features"]
v_array_values = [np.float32(item[()]) for item in valid_data.values()]
for i in range(len(v_array_values)):
v_array_values[i] = v_array_values[i].round(decimals=5)
dict_valid = datasets.Dataset.from_dict({'input_values': v_array_values})
### Expected behavior
The code is expected to give me a Huggingface dataset.
### Environment info
python: 3.8.15
numpy: 1.22.3
datasets: 2.3.2
pyarrow: 8.0.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5632/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5632/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5903
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5903/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5903/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5903/events
|
https://github.com/huggingface/datasets/pull/5903
| 1,727,372,549
|
PR_kwDODunzps5RbV82
| 5,903
|
Relax `ci.yml` trigger for `pull_request` based on modified paths
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Also this could be extended to the rest of the GitHub Action `yml` files, so let me know whether you want me to have a look into it! 🤗",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5903). All of your documentation changes will be reflected on that endpoint.",
"Maybe we can add\r\n```python\r\npaths-ignore:\r\n - \"docs/**\"\r\n```\r\nto `ci.yml` and `benchmarks.yml`. The other supporting files are not modified often, so leaving them out is fine."
] | 2023-05-26T10:46:52Z
| 2023-09-07T15:52:36Z
| null |
MEMBER
| null | null | null |
## What's in this PR?
As of a previous PR at #5902, I've seen that the CI was automatically trigger on any file, in that case when modifying a Jupyter Notebook (.ipynb), which IMO could be skipped, as the modification on the Jupyter Notebook has no effect/impact on the `ci.yml` outcome. So this PR controls the paths that trigger the `ci.yml` to avoid wasting resources when not needed.
## What's pending in this PR?
I would like to confirm whether this should affect both `push` and `pull_request`, since just modifications in those files won't change the `ci.yml` outcome, so maybe it's worth skipping it too in the `push` trigger.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5903/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5903/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5903.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5903",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5903.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5903"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7246
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7246/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7246/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7246/events
|
https://github.com/huggingface/datasets/pull/7246
| 2,605,734,447
|
PR_kwDODunzps5_ehPi
| 7,246
|
Set dev version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7246). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-10-22T15:04:47Z
| 2024-10-22T15:07:31Z
| 2024-10-22T15:04:58Z
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7246/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7246/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7246.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7246",
"merged_at": "2024-10-22T15:04:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7246.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7246"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7045
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7045/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7045/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7045/events
|
https://github.com/huggingface/datasets/pull/7045
| 2,405,447,858
|
PR_kwDODunzps51Nsie
| 7,045
|
Fix tensorflow min version depending on Python version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7045). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005426 / 0.011353 (-0.005927) | 0.003896 / 0.011008 (-0.007112) | 0.063492 / 0.038508 (0.024984) | 0.030199 / 0.023109 (0.007090) | 0.249892 / 0.275898 (-0.026006) | 0.291311 / 0.323480 (-0.032168) | 0.004389 / 0.007986 (-0.003597) | 0.002829 / 0.004328 (-0.001500) | 0.049685 / 0.004250 (0.045435) | 0.043351 / 0.037052 (0.006299) | 0.264265 / 0.258489 (0.005776) | 0.290463 / 0.293841 (-0.003378) | 0.030007 / 0.128546 (-0.098539) | 0.012146 / 0.075646 (-0.063500) | 0.203841 / 0.419271 (-0.215430) | 0.037159 / 0.043533 (-0.006373) | 0.253377 / 0.255139 (-0.001762) | 0.275990 / 0.283200 (-0.007209) | 0.018334 / 0.141683 (-0.123349) | 1.112616 / 1.452155 (-0.339539) | 1.157507 / 1.492716 (-0.335209) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097781 / 0.018006 (0.079775) | 0.314381 / 0.000490 (0.313891) | 0.000217 / 0.000200 (0.000017) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018704 / 0.037411 (-0.018708) | 0.062293 / 0.014526 (0.047767) | 0.073997 / 0.176557 (-0.102559) | 0.120309 / 0.737135 (-0.616826) | 0.075592 / 0.296338 (-0.220747) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283178 / 0.215209 (0.067969) | 2.798027 / 2.077655 (0.720372) | 1.431320 / 1.504120 (-0.072800) | 1.316135 / 1.541195 (-0.225060) | 1.345528 / 1.468490 (-0.122962) | 0.717300 / 4.584777 (-3.867477) | 2.401019 / 3.745712 (-1.344693) | 2.866411 / 5.269862 (-2.403451) | 1.933198 / 4.565676 (-2.632479) | 0.079505 / 0.424275 (-0.344771) | 0.005089 / 0.007607 (-0.002519) | 0.333614 / 0.226044 (0.107569) | 3.315449 / 2.268929 (1.046520) | 1.807667 / 55.444624 (-53.636957) | 1.490537 / 6.876477 (-5.385939) | 1.633305 / 2.142072 (-0.508767) | 0.807732 / 4.805227 (-3.997495) | 0.133825 / 6.500664 (-6.366839) | 0.041696 / 0.075469 (-0.033774) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969063 / 1.841788 (-0.872724) | 11.825985 / 8.074308 (3.751677) | 9.808041 / 10.191392 (-0.383351) | 0.143338 / 0.680424 (-0.537085) | 0.014714 / 0.534201 (-0.519487) | 0.304360 / 0.579283 (-0.274923) | 0.266863 / 0.434364 (-0.167501) | 0.342374 / 0.540337 (-0.197963) | 0.442120 / 1.386936 (-0.944816) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005574 / 0.011353 (-0.005778) | 0.003735 / 0.011008 (-0.007273) | 0.051021 / 0.038508 (0.012513) | 0.032825 / 0.023109 (0.009716) | 0.267775 / 0.275898 (-0.008123) | 0.286015 / 0.323480 (-0.037464) | 0.004332 / 0.007986 (-0.003653) | 0.002796 / 0.004328 (-0.001532) | 0.050183 / 0.004250 (0.045933) | 0.040191 / 0.037052 (0.003138) | 0.279777 / 0.258489 (0.021288) | 0.312161 / 0.293841 (0.018320) | 0.031993 / 0.128546 (-0.096553) | 0.012168 / 0.075646 (-0.063478) | 0.061622 / 0.419271 (-0.357650) | 0.033577 / 0.043533 (-0.009956) | 0.267300 / 0.255139 (0.012161) | 0.284595 / 0.283200 (0.001396) | 0.018476 / 0.141683 (-0.123207) | 1.135917 / 1.452155 (-0.316237) | 1.164516 / 1.492716 (-0.328200) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.108194 / 0.018006 (0.090188) | 0.309514 / 0.000490 (0.309025) | 0.000211 / 0.000200 (0.000011) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022998 / 0.037411 (-0.014413) | 0.077126 / 0.014526 (0.062600) | 0.088779 / 0.176557 (-0.087778) | 0.128646 / 0.737135 (-0.608489) | 0.089895 / 0.296338 (-0.206443) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295131 / 0.215209 (0.079922) | 2.887380 / 2.077655 (0.809726) | 1.586450 / 1.504120 (0.082330) | 1.449831 / 1.541195 (-0.091363) | 1.468805 / 1.468490 (0.000315) | 0.721578 / 4.584777 (-3.863199) | 0.970499 / 3.745712 (-2.775214) | 2.975604 / 5.269862 (-2.294258) | 1.935809 / 4.565676 (-2.629867) | 0.078504 / 0.424275 (-0.345771) | 0.005219 / 0.007607 (-0.002388) | 0.347168 / 0.226044 (0.121124) | 3.417040 / 2.268929 (1.148111) | 1.928707 / 55.444624 (-53.515917) | 1.629398 / 6.876477 (-5.247078) | 1.653014 / 2.142072 (-0.489058) | 0.796097 / 4.805227 (-4.009130) | 0.133956 / 6.500664 (-6.366708) | 0.041567 / 0.075469 (-0.033902) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.995511 / 1.841788 (-0.846277) | 12.577211 / 8.074308 (4.502903) | 10.562561 / 10.191392 (0.371169) | 0.144288 / 0.680424 (-0.536136) | 0.016345 / 0.534201 (-0.517856) | 0.304364 / 0.579283 (-0.274920) | 0.134630 / 0.434364 (-0.299734) | 0.341494 / 0.540337 (-0.198843) | 0.436238 / 1.386936 (-0.950698) |\n\n</details>\n</details>\n\n\n"
] | 2024-07-12T12:20:23Z
| 2024-07-12T12:38:53Z
| 2024-07-12T12:33:00Z
|
MEMBER
| null | null | null |
Fix tensorflow min version depending on Python version.
Related to:
- #6991
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7045/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7045/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7045.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7045",
"merged_at": "2024-07-12T12:33:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7045.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7045"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6873
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6873/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6873/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6873/events
|
https://github.com/huggingface/datasets/pull/6873
| 2,280,463,182
|
PR_kwDODunzps5unXnq
| 6,873
|
Set dev version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6873). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005301 / 0.011353 (-0.006052) | 0.003633 / 0.011008 (-0.007375) | 0.063414 / 0.038508 (0.024906) | 0.042406 / 0.023109 (0.019297) | 0.253414 / 0.275898 (-0.022484) | 0.276811 / 0.323480 (-0.046668) | 0.003148 / 0.007986 (-0.004837) | 0.002614 / 0.004328 (-0.001715) | 0.049208 / 0.004250 (0.044958) | 0.045819 / 0.037052 (0.008767) | 0.268027 / 0.258489 (0.009538) | 0.298821 / 0.293841 (0.004980) | 0.028460 / 0.128546 (-0.100086) | 0.010671 / 0.075646 (-0.064975) | 0.208602 / 0.419271 (-0.210669) | 0.036057 / 0.043533 (-0.007476) | 0.256079 / 0.255139 (0.000940) | 0.277040 / 0.283200 (-0.006160) | 0.019018 / 0.141683 (-0.122665) | 1.147070 / 1.452155 (-0.305085) | 1.175838 / 1.492716 (-0.316878) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092216 / 0.018006 (0.074210) | 0.304774 / 0.000490 (0.304284) | 0.000212 / 0.000200 (0.000012) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018242 / 0.037411 (-0.019170) | 0.061088 / 0.014526 (0.046562) | 0.074517 / 0.176557 (-0.102039) | 0.120444 / 0.737135 (-0.616691) | 0.074628 / 0.296338 (-0.221710) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283914 / 0.215209 (0.068705) | 2.859123 / 2.077655 (0.781469) | 1.495152 / 1.504120 (-0.008967) | 1.395514 / 1.541195 (-0.145681) | 1.454076 / 1.468490 (-0.014414) | 0.568758 / 4.584777 (-4.016019) | 2.461304 / 3.745712 (-1.284408) | 2.836192 / 5.269862 (-2.433670) | 1.815463 / 4.565676 (-2.750213) | 0.065762 / 0.424275 (-0.358513) | 0.006872 / 0.007607 (-0.000736) | 0.339304 / 0.226044 (0.113260) | 3.326544 / 2.268929 (1.057616) | 1.847970 / 55.444624 (-53.596654) | 1.572667 / 6.876477 (-5.303809) | 1.595717 / 2.142072 (-0.546355) | 0.644196 / 4.805227 (-4.161031) | 0.120320 / 6.500664 (-6.380344) | 0.043334 / 0.075469 (-0.032135) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965807 / 1.841788 (-0.875981) | 11.628715 / 8.074308 (3.554406) | 9.485618 / 10.191392 (-0.705774) | 0.152387 / 0.680424 (-0.528037) | 0.013852 / 0.534201 (-0.520349) | 0.285833 / 0.579283 (-0.293450) | 0.263692 / 0.434364 (-0.170672) | 0.323086 / 0.540337 (-0.217251) | 0.418178 / 1.386936 (-0.968758) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005505 / 0.011353 (-0.005848) | 0.003630 / 0.011008 (-0.007378) | 0.049780 / 0.038508 (0.011272) | 0.030469 / 0.023109 (0.007359) | 0.270052 / 0.275898 (-0.005846) | 0.294370 / 0.323480 (-0.029110) | 0.004207 / 0.007986 (-0.003779) | 0.002720 / 0.004328 (-0.001609) | 0.048952 / 0.004250 (0.044701) | 0.041006 / 0.037052 (0.003953) | 0.281585 / 0.258489 (0.023096) | 0.310600 / 0.293841 (0.016759) | 0.029457 / 0.128546 (-0.099089) | 0.010508 / 0.075646 (-0.065138) | 0.058090 / 0.419271 (-0.361181) | 0.032814 / 0.043533 (-0.010718) | 0.272755 / 0.255139 (0.017616) | 0.292154 / 0.283200 (0.008954) | 0.018312 / 0.141683 (-0.123371) | 1.177199 / 1.452155 (-0.274955) | 1.238803 / 1.492716 (-0.253913) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093889 / 0.018006 (0.075883) | 0.303054 / 0.000490 (0.302564) | 0.000204 / 0.000200 (0.000004) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022556 / 0.037411 (-0.014856) | 0.075951 / 0.014526 (0.061425) | 0.086824 / 0.176557 (-0.089732) | 0.128091 / 0.737135 (-0.609044) | 0.088146 / 0.296338 (-0.208192) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292563 / 0.215209 (0.077354) | 2.882656 / 2.077655 (0.805001) | 1.559814 / 1.504120 (0.055695) | 1.443760 / 1.541195 (-0.097435) | 1.460967 / 1.468490 (-0.007523) | 0.567812 / 4.584777 (-4.016965) | 0.964407 / 3.745712 (-2.781305) | 2.819782 / 5.269862 (-2.450079) | 1.733334 / 4.565676 (-2.832343) | 0.064745 / 0.424275 (-0.359530) | 0.005178 / 0.007607 (-0.002429) | 0.345322 / 0.226044 (0.119278) | 3.407204 / 2.268929 (1.138275) | 1.919337 / 55.444624 (-53.525288) | 1.643463 / 6.876477 (-5.233013) | 1.682191 / 2.142072 (-0.459881) | 0.639432 / 4.805227 (-4.165795) | 0.115659 / 6.500664 (-6.385005) | 0.041202 / 0.075469 (-0.034267) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004664 / 1.841788 (-0.837123) | 12.043460 / 8.074308 (3.969152) | 9.856431 / 10.191392 (-0.334961) | 0.131351 / 0.680424 (-0.549072) | 0.015800 / 0.534201 (-0.518401) | 0.288211 / 0.579283 (-0.291072) | 0.126065 / 0.434364 (-0.308298) | 0.386494 / 0.540337 (-0.153843) | 0.424203 / 1.386936 (-0.962733) |\n\n</details>\n</details>\n\n\n"
] | 2024-05-06T09:43:18Z
| 2024-05-06T10:03:19Z
| 2024-05-06T09:57:12Z
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6873/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6873/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6873.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6873",
"merged_at": "2024-05-06T09:57:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6873.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6873"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7057
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7057/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7057/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7057/events
|
https://github.com/huggingface/datasets/pull/7057
| 2,422,498,520
|
PR_kwDODunzps52EjGC
| 7,057
|
Update load_hub.mdx
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7057). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005617 / 0.011353 (-0.005736) | 0.003994 / 0.011008 (-0.007014) | 0.064188 / 0.038508 (0.025680) | 0.030939 / 0.023109 (0.007829) | 0.248712 / 0.275898 (-0.027186) | 0.273417 / 0.323480 (-0.050063) | 0.003340 / 0.007986 (-0.004646) | 0.002823 / 0.004328 (-0.001506) | 0.049985 / 0.004250 (0.045734) | 0.046872 / 0.037052 (0.009820) | 0.254554 / 0.258489 (-0.003935) | 0.288142 / 0.293841 (-0.005699) | 0.030540 / 0.128546 (-0.098006) | 0.012295 / 0.075646 (-0.063352) | 0.204589 / 0.419271 (-0.214683) | 0.036383 / 0.043533 (-0.007150) | 0.254277 / 0.255139 (-0.000862) | 0.267962 / 0.283200 (-0.015237) | 0.021173 / 0.141683 (-0.120510) | 1.126933 / 1.452155 (-0.325221) | 1.190841 / 1.492716 (-0.301875) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093622 / 0.018006 (0.075616) | 0.297967 / 0.000490 (0.297477) | 0.000241 / 0.000200 (0.000041) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018623 / 0.037411 (-0.018789) | 0.062210 / 0.014526 (0.047684) | 0.074369 / 0.176557 (-0.102187) | 0.120585 / 0.737135 (-0.616550) | 0.075966 / 0.296338 (-0.220372) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285440 / 0.215209 (0.070231) | 2.804275 / 2.077655 (0.726620) | 1.484539 / 1.504120 (-0.019580) | 1.366587 / 1.541195 (-0.174607) | 1.355269 / 1.468490 (-0.113221) | 0.722289 / 4.584777 (-3.862488) | 2.344567 / 3.745712 (-1.401145) | 2.831779 / 5.269862 (-2.438083) | 1.899800 / 4.565676 (-2.665876) | 0.078657 / 0.424275 (-0.345619) | 0.005188 / 0.007607 (-0.002420) | 0.340150 / 0.226044 (0.114106) | 3.390915 / 2.268929 (1.121986) | 1.836473 / 55.444624 (-53.608152) | 1.520718 / 6.876477 (-5.355759) | 1.723448 / 2.142072 (-0.418624) | 0.810281 / 4.805227 (-3.994946) | 0.136008 / 6.500664 (-6.364657) | 0.044005 / 0.075469 (-0.031465) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.989982 / 1.841788 (-0.851806) | 11.671075 / 8.074308 (3.596767) | 9.805471 / 10.191392 (-0.385921) | 0.141637 / 0.680424 (-0.538787) | 0.014551 / 0.534201 (-0.519650) | 0.310077 / 0.579283 (-0.269206) | 0.266838 / 0.434364 (-0.167526) | 0.348894 / 0.540337 (-0.191444) | 0.451530 / 1.386936 (-0.935406) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005639 / 0.011353 (-0.005713) | 0.003935 / 0.011008 (-0.007074) | 0.050147 / 0.038508 (0.011639) | 0.031023 / 0.023109 (0.007914) | 0.268361 / 0.275898 (-0.007537) | 0.295774 / 0.323480 (-0.027706) | 0.005029 / 0.007986 (-0.002956) | 0.002832 / 0.004328 (-0.001496) | 0.049806 / 0.004250 (0.045556) | 0.040515 / 0.037052 (0.003463) | 0.283298 / 0.258489 (0.024809) | 0.321946 / 0.293841 (0.028105) | 0.031833 / 0.128546 (-0.096714) | 0.012137 / 0.075646 (-0.063510) | 0.060510 / 0.419271 (-0.358761) | 0.033754 / 0.043533 (-0.009779) | 0.268079 / 0.255139 (0.012940) | 0.292468 / 0.283200 (0.009268) | 0.017268 / 0.141683 (-0.124414) | 1.159922 / 1.452155 (-0.292233) | 1.188961 / 1.492716 (-0.303755) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096930 / 0.018006 (0.078923) | 0.306921 / 0.000490 (0.306431) | 0.000226 / 0.000200 (0.000026) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022811 / 0.037411 (-0.014600) | 0.077298 / 0.014526 (0.062772) | 0.088949 / 0.176557 (-0.087608) | 0.130763 / 0.737135 (-0.606372) | 0.090429 / 0.296338 (-0.205909) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300866 / 0.215209 (0.085657) | 2.963375 / 2.077655 (0.885720) | 1.595753 / 1.504120 (0.091633) | 1.463091 / 1.541195 (-0.078104) | 1.481182 / 1.468490 (0.012692) | 0.712939 / 4.584777 (-3.871838) | 0.956694 / 3.745712 (-2.789018) | 2.802890 / 5.269862 (-2.466971) | 1.891092 / 4.565676 (-2.674585) | 0.077570 / 0.424275 (-0.346706) | 0.005536 / 0.007607 (-0.002072) | 0.351958 / 0.226044 (0.125914) | 3.459114 / 2.268929 (1.190185) | 1.989488 / 55.444624 (-53.455137) | 1.676271 / 6.876477 (-5.200205) | 1.808073 / 2.142072 (-0.334000) | 0.786920 / 4.805227 (-4.018307) | 0.132220 / 6.500664 (-6.368444) | 0.041602 / 0.075469 (-0.033867) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.031759 / 1.841788 (-0.810029) | 12.007776 / 8.074308 (3.933467) | 10.568254 / 10.191392 (0.376862) | 0.143176 / 0.680424 (-0.537248) | 0.015556 / 0.534201 (-0.518645) | 0.304484 / 0.579283 (-0.274799) | 0.125508 / 0.434364 (-0.308855) | 0.340017 / 0.540337 (-0.200320) | 0.434285 / 1.386936 (-0.952651) |\n\n</details>\n</details>\n\n\n"
] | 2024-07-22T10:17:46Z
| 2024-07-22T10:34:14Z
| 2024-07-22T10:28:10Z
|
COLLABORATOR
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7057/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7057/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7057.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7057",
"merged_at": "2024-07-22T10:28:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7057.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7057"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4728
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4728/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4728/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4728/events
|
https://github.com/huggingface/datasets/issues/4728
| 1,312,897,454
|
I_kwDODunzps5OQTmu
| 4,728
|
load_dataset gives "403" error when using Financial Phrasebank
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2209134?v=4",
"events_url": "https://api.github.com/users/rohitvincent/events{/privacy}",
"followers_url": "https://api.github.com/users/rohitvincent/followers",
"following_url": "https://api.github.com/users/rohitvincent/following{/other_user}",
"gists_url": "https://api.github.com/users/rohitvincent/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rohitvincent",
"id": 2209134,
"login": "rohitvincent",
"node_id": "MDQ6VXNlcjIyMDkxMzQ=",
"organizations_url": "https://api.github.com/users/rohitvincent/orgs",
"received_events_url": "https://api.github.com/users/rohitvincent/received_events",
"repos_url": "https://api.github.com/users/rohitvincent/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rohitvincent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rohitvincent/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rohitvincent",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi @rohitvincent, thanks for reporting.\r\n\r\nUnfortunately I'm not able to reproduce your issue:\r\n```python\r\nIn [2]: from datasets import load_dataset, DownloadMode\r\n ...: load_dataset(path='financial_phrasebank',name='sentences_allagree', download_mode=\"force_redownload\")\r\nDownloading builder script: 6.04kB [00:00, 2.87MB/s] \r\nDownloading metadata: 13.7kB [00:00, 7.24MB/s] \r\nDownloading and preparing dataset financial_phrasebank/sentences_allagree (download: 665.91 KiB, generated: 296.26 KiB, post-processed: Unknown size, total: 962.17 KiB) to .../.cache/huggingface/datasets/financial_phrasebank/sentences_allagree/1.0.0/550bde12e6c30e2674da973a55f57edde5181d53f5a5a34c1531c53f93b7e141...\r\nDownloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 682k/682k [00:00<00:00, 7.66MB/s]\r\nDataset financial_phrasebank downloaded and prepared to .../.cache/huggingface/datasets/financial_phrasebank/sentences_allagree/1.0.0/550bde12e6c30e2674da973a55f57edde5181d53f5a5a34c1531c53f93b7e141. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 918.80it/s]\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label'],\r\n num_rows: 2264\r\n })\r\n})\r\n```\r\n\r\nAre you able to access the link? https://www.researchgate.net/profile/Pekka-Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip",
"Yes was able to download from the link manually. But still, get the same error when I use load_dataset.",
"Fixed once data files are hosted on the Hub:\r\n- #4598"
] | 2022-07-21T08:43:32Z
| 2022-08-04T08:32:35Z
| 2022-08-04T08:32:35Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
I tried both codes below to download the financial phrasebank dataset (https://huggingface.co/datasets/financial_phrasebank) with the sentences_allagree subset. However, the code gives a 403 error when executed from multiple machines locally or on the cloud.
```
from datasets import load_dataset, DownloadMode
load_dataset(path='financial_phrasebank',name='sentences_allagree',download_mode=DownloadMode.FORCE_REDOWNLOAD)
```
```
from datasets import load_dataset, DownloadMode
load_dataset(path='financial_phrasebank',name='sentences_allagree')
```
**Error**
ConnectionError: Couldn't reach https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip (error 403)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4728/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4728/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6098
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6098/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6098/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6098/events
|
https://github.com/huggingface/datasets/pull/6098
| 1,827,655,071
|
PR_kwDODunzps5WuCn1
| 6,098
|
Expanduser in save_to_disk()
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/51715864?v=4",
"events_url": "https://api.github.com/users/Unknown3141592/events{/privacy}",
"followers_url": "https://api.github.com/users/Unknown3141592/followers",
"following_url": "https://api.github.com/users/Unknown3141592/following{/other_user}",
"gists_url": "https://api.github.com/users/Unknown3141592/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Unknown3141592",
"id": 51715864,
"login": "Unknown3141592",
"node_id": "MDQ6VXNlcjUxNzE1ODY0",
"organizations_url": "https://api.github.com/users/Unknown3141592/orgs",
"received_events_url": "https://api.github.com/users/Unknown3141592/received_events",
"repos_url": "https://api.github.com/users/Unknown3141592/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Unknown3141592/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Unknown3141592/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Unknown3141592",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> I am not sure why the case distinction between local and remote filesystems is even necessary for DatasetDict when saving to disk. Imo this could be removed (leaving only fs.makedirs(dataset_dict_path, exist_ok=True)).\r\n\r\nIndeed. But it's better to address this in a separate PR.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007696 / 0.011353 (-0.003656) | 0.004497 / 0.011008 (-0.006511) | 0.099302 / 0.038508 (0.060794) | 0.083360 / 0.023109 (0.060251) | 0.393483 / 0.275898 (0.117585) | 0.450505 / 0.323480 (0.127025) | 0.004610 / 0.007986 (-0.003376) | 0.003637 / 0.004328 (-0.000692) | 0.075752 / 0.004250 (0.071501) | 0.064034 / 0.037052 (0.026982) | 0.397785 / 0.258489 (0.139296) | 0.462948 / 0.293841 (0.169107) | 0.035902 / 0.128546 (-0.092644) | 0.009640 / 0.075646 (-0.066007) | 0.342299 / 0.419271 (-0.076973) | 0.059586 / 0.043533 (0.016053) | 0.404918 / 0.255139 (0.149779) | 0.440889 / 0.283200 (0.157690) | 0.028981 / 0.141683 (-0.112702) | 1.775380 / 1.452155 (0.323226) | 1.866663 / 1.492716 (0.373946) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249080 / 0.018006 (0.231074) | 0.456460 / 0.000490 (0.455970) | 0.028145 / 0.000200 (0.027945) | 0.000402 / 0.000054 (0.000347) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030373 / 0.037411 (-0.007038) | 0.088562 / 0.014526 (0.074036) | 0.122837 / 0.176557 (-0.053720) | 0.167122 / 0.737135 (-0.570014) | 0.103953 / 0.296338 (-0.192385) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431714 / 0.215209 (0.216505) | 4.182224 / 2.077655 (2.104570) | 2.025650 / 1.504120 (0.521530) | 1.838905 / 1.541195 (0.297710) | 1.868710 / 1.468490 (0.400219) | 0.538422 / 4.584777 (-4.046355) | 4.038941 / 3.745712 (0.293228) | 3.717695 / 5.269862 (-1.552166) | 2.313197 / 4.565676 (-2.252479) | 0.061060 / 0.424275 (-0.363215) | 0.008248 / 0.007607 (0.000641) | 0.497438 / 0.226044 (0.271394) | 4.946663 / 2.268929 (2.677734) | 2.571841 / 55.444624 (-52.872784) | 2.155894 / 6.876477 (-4.720583) | 2.183180 / 2.142072 (0.041107) | 0.639810 / 4.805227 (-4.165417) | 0.153273 / 6.500664 (-6.347391) | 0.068606 / 0.075469 (-0.006863) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.376152 / 1.841788 (-0.465635) | 20.747088 / 8.074308 (12.672780) | 15.200311 / 10.191392 (5.008919) | 0.166380 / 0.680424 (-0.514043) | 0.021417 / 0.534201 (-0.512784) | 0.435677 / 0.579283 (-0.143606) | 0.460412 / 0.434364 (0.026048) | 0.509978 / 0.540337 (-0.030359) | 0.702506 / 1.386936 (-0.684430) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007378 / 0.011353 (-0.003975) | 0.003938 / 0.011008 (-0.007070) | 0.067095 / 0.038508 (0.028587) | 0.082252 / 0.023109 (0.059143) | 0.420317 / 0.275898 (0.144419) | 0.477496 / 0.323480 (0.154017) | 0.006259 / 0.007986 (-0.001727) | 0.003513 / 0.004328 (-0.000816) | 0.072107 / 0.004250 (0.067856) | 0.061737 / 0.037052 (0.024684) | 0.444142 / 0.258489 (0.185653) | 0.488926 / 0.293841 (0.195085) | 0.033623 / 0.128546 (-0.094923) | 0.008091 / 0.075646 (-0.067555) | 0.073997 / 0.419271 (-0.345274) | 0.051295 / 0.043533 (0.007762) | 0.442551 / 0.255139 (0.187412) | 0.462713 / 0.283200 (0.179513) | 0.023115 / 0.141683 (-0.118568) | 1.645759 / 1.452155 (0.193604) | 1.758121 / 1.492716 (0.265405) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233450 / 0.018006 (0.215444) | 0.445384 / 0.000490 (0.444894) | 0.006412 / 0.000200 (0.006212) | 0.000111 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032446 / 0.037411 (-0.004965) | 0.098515 / 0.014526 (0.083989) | 0.109095 / 0.176557 (-0.067462) | 0.167645 / 0.737135 (-0.569490) | 0.110403 / 0.296338 (-0.185936) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.470189 / 0.215209 (0.254980) | 4.663224 / 2.077655 (2.585569) | 2.504474 / 1.504120 (1.000354) | 2.282867 / 1.541195 (0.741673) | 2.331598 / 1.468490 (0.863108) | 0.554421 / 4.584777 (-4.030356) | 4.078657 / 3.745712 (0.332945) | 3.516339 / 5.269862 (-1.753523) | 2.239134 / 4.565676 (-2.326542) | 0.062690 / 0.424275 (-0.361585) | 0.008406 / 0.007607 (0.000799) | 0.533827 / 0.226044 (0.307782) | 5.423984 / 2.268929 (3.155055) | 2.972784 / 55.444624 (-52.471840) | 2.699056 / 6.876477 (-4.177421) | 2.844403 / 2.142072 (0.702331) | 0.639194 / 4.805227 (-4.166033) | 0.142097 / 6.500664 (-6.358567) | 0.064646 / 0.075469 (-0.010823) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.544640 / 1.841788 (-0.297148) | 21.453429 / 8.074308 (13.379121) | 15.610723 / 10.191392 (5.419331) | 0.207796 / 0.680424 (-0.472628) | 0.021912 / 0.534201 (-0.512289) | 0.430472 / 0.579283 (-0.148811) | 0.467530 / 0.434364 (0.033166) | 0.541339 / 0.540337 (0.001002) | 0.721976 / 1.386936 (-0.664960) |\n\n</details>\n</details>\n\n\n"
] | 2023-07-29T20:50:45Z
| 2023-10-27T14:14:11Z
| 2023-10-27T14:04:36Z
|
CONTRIBUTOR
| null | null | null |
Fixes #5651. The same problem occurs when loading from disk so I fixed it there too.
I am not sure why the case distinction between local and remote filesystems is even necessary for `DatasetDict` when saving to disk. Imo this could be removed (leaving only `fs.makedirs(dataset_dict_path, exist_ok=True)`).
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6098/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6098/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6098.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6098",
"merged_at": "2023-10-27T14:04:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6098.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6098"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6387
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6387/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6387/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6387/events
|
https://github.com/huggingface/datasets/issues/6387
| 1,980,224,020
|
I_kwDODunzps52B9IU
| 6,387
|
How to load existing downloaded dataset ?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/73068772?v=4",
"events_url": "https://api.github.com/users/liming-ai/events{/privacy}",
"followers_url": "https://api.github.com/users/liming-ai/followers",
"following_url": "https://api.github.com/users/liming-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/liming-ai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/liming-ai",
"id": 73068772,
"login": "liming-ai",
"node_id": "MDQ6VXNlcjczMDY4Nzcy",
"organizations_url": "https://api.github.com/users/liming-ai/orgs",
"received_events_url": "https://api.github.com/users/liming-ai/received_events",
"repos_url": "https://api.github.com/users/liming-ai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/liming-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liming-ai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/liming-ai",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"Feel free to use `dataset.save_to_disk(...)`, then scp the directory containing the saved dataset and reload it on your other machine using `dataset = load_from_disk(...)`"
] | 2023-11-06T22:51:44Z
| 2023-11-16T18:07:01Z
| 2023-11-16T18:07:01Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Hi @mariosasko @lhoestq @katielink
Thanks for your contribution and hard work.
### Feature request
First, I download a dataset as normal by:
```
from datasets import load_dataset
dataset = load_dataset('username/data_name', cache_dir='data')
```
The dataset format in `data` directory will be:
```
-data
|-data_name
|-test-00000-of-00001-bf4c733542e35fcb.parquet
|-train-00000-of-00001-2a1df75c6bce91ab.parquet
```
Then I use SCP to clone this dataset into another machine, and then try:
```
from datasets import load_dataset
dataset = load_dataset('data/data_name') # load from local path
```
This leads to re-generating training and validation split for each time, and the disk quota will be duplicated occupation.
How can I just load the dataset without generating and saving these splits again?
### Motivation
I do not want to download the same dataset in two machines, scp is much faster and better than HuggingFace API. I hope we can directly load the downloaded datasets (.parquest)
### Your contribution
Please refer to the feature
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/73068772?v=4",
"events_url": "https://api.github.com/users/liming-ai/events{/privacy}",
"followers_url": "https://api.github.com/users/liming-ai/followers",
"following_url": "https://api.github.com/users/liming-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/liming-ai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/liming-ai",
"id": 73068772,
"login": "liming-ai",
"node_id": "MDQ6VXNlcjczMDY4Nzcy",
"organizations_url": "https://api.github.com/users/liming-ai/orgs",
"received_events_url": "https://api.github.com/users/liming-ai/received_events",
"repos_url": "https://api.github.com/users/liming-ai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/liming-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liming-ai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/liming-ai",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6387/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6387/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7310
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7310/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7310/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7310/events
|
https://github.com/huggingface/datasets/issues/7310
| 2,724,830,603
|
I_kwDODunzps6iaZ2L
| 7,310
|
Enable the Audio Feature to decode / read with an offset + duration
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11910731?v=4",
"events_url": "https://api.github.com/users/TParcollet/events{/privacy}",
"followers_url": "https://api.github.com/users/TParcollet/followers",
"following_url": "https://api.github.com/users/TParcollet/following{/other_user}",
"gists_url": "https://api.github.com/users/TParcollet/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TParcollet",
"id": 11910731,
"login": "TParcollet",
"node_id": "MDQ6VXNlcjExOTEwNzMx",
"organizations_url": "https://api.github.com/users/TParcollet/orgs",
"received_events_url": "https://api.github.com/users/TParcollet/received_events",
"repos_url": "https://api.github.com/users/TParcollet/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TParcollet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TParcollet/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TParcollet",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Hi ! What about having audio + start + duration columns and enable something like this ?\r\n\r\n```python\r\nfor example in ds:\r\n array = example[\"audio\"].read(start=example[\"start\"], frames=example[\"duration\"])\r\n```",
"Hi @lhoestq, this would work with a file-based dataset but would be terrible for a sharded one as it would duplicate the large audio file many times. Also, very long audio files are not embedded very well in the parquet file, even with large_binary(). It crashed a few times for me until I switched to one sample == one file :-( "
] | 2024-12-07T22:01:44Z
| 2024-12-09T21:09:46Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
For most large speech dataset, we do not wish to generate hundreds of millions of small audio samples. Instead, it is quite common to provide larger audio files with frame offset (soundfile start and stop arguments). We should be able to pass these arguments to Audio() (column ID corresponding in the dataset row).
### Motivation
I am currently generating a fairly big dataset to .parquet(). Unfortunately, it does not work because all existing functions load the whole .wav file corresponding to the row. All my attempts at bypassing this failed. We should be able to put in the Table only the bytes corresponding to what soundfile reads with an offset (and subset of the audio file).
### Your contribution
I can totally test whatever code on my large dataset creation script.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7310/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7310/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6518
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6518/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6518/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6518/events
|
https://github.com/huggingface/datasets/pull/6518
| 2,050,137,038
|
PR_kwDODunzps5icu-W
| 6,518
|
fix get_metadata_patterns function args error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4",
"events_url": "https://api.github.com/users/d710055071/events{/privacy}",
"followers_url": "https://api.github.com/users/d710055071/followers",
"following_url": "https://api.github.com/users/d710055071/following{/other_user}",
"gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/d710055071",
"id": 12895488,
"login": "d710055071",
"node_id": "MDQ6VXNlcjEyODk1NDg4",
"organizations_url": "https://api.github.com/users/d710055071/orgs",
"received_events_url": "https://api.github.com/users/d710055071/received_events",
"repos_url": "https://api.github.com/users/d710055071/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d710055071/subscriptions",
"type": "User",
"url": "https://api.github.com/users/d710055071",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6518). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"hello!\r\n@albertvillanova \r\nThank you very much for your recognition。\r\nWhen can this PR be merged?",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005205 / 0.011353 (-0.006148) | 0.003730 / 0.011008 (-0.007278) | 0.063195 / 0.038508 (0.024687) | 0.052329 / 0.023109 (0.029219) | 0.247299 / 0.275898 (-0.028599) | 0.269600 / 0.323480 (-0.053880) | 0.004801 / 0.007986 (-0.003185) | 0.002728 / 0.004328 (-0.001600) | 0.049195 / 0.004250 (0.044944) | 0.044859 / 0.037052 (0.007807) | 0.253047 / 0.258489 (-0.005442) | 0.277253 / 0.293841 (-0.016588) | 0.028370 / 0.128546 (-0.100176) | 0.011095 / 0.075646 (-0.064551) | 0.211090 / 0.419271 (-0.208182) | 0.035944 / 0.043533 (-0.007589) | 0.252755 / 0.255139 (-0.002384) | 0.269466 / 0.283200 (-0.013733) | 0.017514 / 0.141683 (-0.124169) | 1.107815 / 1.452155 (-0.344339) | 1.154989 / 1.492716 (-0.337728) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093925 / 0.018006 (0.075919) | 0.300923 / 0.000490 (0.300433) | 0.000219 / 0.000200 (0.000019) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018268 / 0.037411 (-0.019143) | 0.060508 / 0.014526 (0.045983) | 0.074564 / 0.176557 (-0.101992) | 0.121523 / 0.737135 (-0.615612) | 0.077394 / 0.296338 (-0.218945) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275859 / 0.215209 (0.060650) | 2.707593 / 2.077655 (0.629938) | 1.419178 / 1.504120 (-0.084942) | 1.286737 / 1.541195 (-0.254458) | 1.350504 / 1.468490 (-0.117986) | 0.570461 / 4.584777 (-4.014316) | 2.400795 / 3.745712 (-1.344917) | 2.840876 / 5.269862 (-2.428986) | 1.724044 / 4.565676 (-2.841633) | 0.063819 / 0.424275 (-0.360456) | 0.004961 / 0.007607 (-0.002647) | 0.342537 / 0.226044 (0.116492) | 3.370942 / 2.268929 (1.102013) | 1.788659 / 55.444624 (-53.655966) | 1.501921 / 6.876477 (-5.374556) | 1.535352 / 2.142072 (-0.606721) | 0.651838 / 4.805227 (-4.153390) | 0.118979 / 6.500664 (-6.381685) | 0.047796 / 0.075469 (-0.027673) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.949850 / 1.841788 (-0.891937) | 11.581988 / 8.074308 (3.507680) | 10.462837 / 10.191392 (0.271445) | 0.133298 / 0.680424 (-0.547125) | 0.015008 / 0.534201 (-0.519193) | 0.299265 / 0.579283 (-0.280018) | 0.268864 / 0.434364 (-0.165500) | 0.332888 / 0.540337 (-0.207450) | 0.420423 / 1.386936 (-0.966513) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005309 / 0.011353 (-0.006044) | 0.003628 / 0.011008 (-0.007380) | 0.049545 / 0.038508 (0.011036) | 0.054095 / 0.023109 (0.030985) | 0.270679 / 0.275898 (-0.005219) | 0.295744 / 0.323480 (-0.027736) | 0.004131 / 0.007986 (-0.003855) | 0.002732 / 0.004328 (-0.001596) | 0.048714 / 0.004250 (0.044464) | 0.039916 / 0.037052 (0.002863) | 0.272354 / 0.258489 (0.013865) | 0.310553 / 0.293841 (0.016712) | 0.029525 / 0.128546 (-0.099021) | 0.011322 / 0.075646 (-0.064324) | 0.058007 / 0.419271 (-0.361265) | 0.032883 / 0.043533 (-0.010650) | 0.273609 / 0.255139 (0.018470) | 0.291780 / 0.283200 (0.008581) | 0.020538 / 0.141683 (-0.121145) | 1.118031 / 1.452155 (-0.334123) | 1.160777 / 1.492716 (-0.331940) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092966 / 0.018006 (0.074959) | 0.301432 / 0.000490 (0.300943) | 0.000225 / 0.000200 (0.000025) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022736 / 0.037411 (-0.014676) | 0.077655 / 0.014526 (0.063129) | 0.093386 / 0.176557 (-0.083171) | 0.129694 / 0.737135 (-0.607441) | 0.092790 / 0.296338 (-0.203548) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299161 / 0.215209 (0.083952) | 2.923300 / 2.077655 (0.845645) | 1.629661 / 1.504120 (0.125541) | 1.510797 / 1.541195 (-0.030398) | 1.507269 / 1.468490 (0.038778) | 0.574346 / 4.584777 (-4.010431) | 2.454396 / 3.745712 (-1.291316) | 2.843402 / 5.269862 (-2.426460) | 1.774815 / 4.565676 (-2.790861) | 0.063601 / 0.424275 (-0.360674) | 0.004977 / 0.007607 (-0.002630) | 0.347693 / 0.226044 (0.121649) | 3.430054 / 2.268929 (1.161126) | 1.987308 / 55.444624 (-53.457316) | 1.682756 / 6.876477 (-5.193721) | 1.688463 / 2.142072 (-0.453609) | 0.646449 / 4.805227 (-4.158778) | 0.117860 / 6.500664 (-6.382804) | 0.041305 / 0.075469 (-0.034164) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.987355 / 1.841788 (-0.854433) | 12.398721 / 8.074308 (4.324412) | 11.070442 / 10.191392 (0.879050) | 0.134946 / 0.680424 (-0.545477) | 0.016172 / 0.534201 (-0.518029) | 0.293359 / 0.579283 (-0.285924) | 0.282271 / 0.434364 (-0.152093) | 0.331919 / 0.540337 (-0.208418) | 0.432137 / 1.386936 (-0.954799) |\n\n</details>\n</details>\n\n\n"
] | 2023-12-20T09:06:22Z
| 2023-12-21T15:14:17Z
| 2023-12-21T15:07:57Z
|
CONTRIBUTOR
| null | null | null |
Bug get_metadata_patterns arg error https://github.com/huggingface/datasets/issues/6517
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6518/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6518/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6518.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6518",
"merged_at": "2023-12-21T15:07:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6518.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6518"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6140
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6140/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6140/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6140/events
|
https://github.com/huggingface/datasets/issues/6140
| 1,845,384,712
|
I_kwDODunzps5t_lYI
| 6,140
|
Misalignment between file format specified in configs metadata YAML and the inferred builder
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[] | 2023-08-10T15:07:34Z
| 2023-08-17T20:37:20Z
| 2023-08-17T20:37:20Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
There is a misalignment between the format of the `data_files` specified in the configs metadata YAML (CSV):
```yaml
configs:
- config_name: default
data_files:
- split: train
path: data.csv
```
and the inferred builder (JSON). Note there are multiple JSON files in the repo, but they do not appear in the configs metadata YAML.
See: https://huggingface.co/datasets/freddyaboulton/chatinterface_with_image_csv/discussions/1
CC: @freddyaboulton @polinaeterna
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6140/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6140/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4964
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4964/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4964/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4964/events
|
https://github.com/huggingface/datasets/issues/4964
| 1,368,617,322
|
I_kwDODunzps5Rk3Fq
| 4,964
|
Column of arrays (2D+) are using unreasonably high memory
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/30353?v=4",
"events_url": "https://api.github.com/users/vigsterkr/events{/privacy}",
"followers_url": "https://api.github.com/users/vigsterkr/followers",
"following_url": "https://api.github.com/users/vigsterkr/following{/other_user}",
"gists_url": "https://api.github.com/users/vigsterkr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vigsterkr",
"id": 30353,
"login": "vigsterkr",
"node_id": "MDQ6VXNlcjMwMzUz",
"organizations_url": "https://api.github.com/users/vigsterkr/orgs",
"received_events_url": "https://api.github.com/users/vigsterkr/received_events",
"repos_url": "https://api.github.com/users/vigsterkr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vigsterkr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vigsterkr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vigsterkr",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] | null |
[
"note i have tried the same code with `datasets` version 2.4.0, the outcome is the very same as described above.",
"Seems related to issues #4623 and #4802 so it would appear this issue has been around for a few months.",
"Hi ! `Dataset.from_dict` keeps the data in memory. You can write on disk and reload them with\r\n```python\r\ndataset.save_to_disk(\"path/to/local\")\r\ndataset = load_from_disk(\"path/to/local\")\r\n```\r\nthis way you'll end up with a dataset loaded from your disk using memory mapping, and it won't fill up your RAM :)\r\n\r\nrelated to https://github.com/huggingface/datasets/issues/4861",
"@lhoestq thnx for getting back to me! i've tested the suggested method, but unfortunately the memory consumption is the very same:\r\n\r\n```\r\nfrom datasets import Dataset, Features, Array2D, Array3D, load_from_disk\r\nimport numpy as np\r\n\r\ncolumn_name = \"a\"\r\narray_shape = (64, 64, 3)\r\n\r\ndata = np.random.random((10000,) + array_shape)\r\ndataset = Dataset.from_dict({column_name: data}, features=Features({column_name: Array3D(shape=array_shape, dtype=\"float64\")}))\r\ndataset.save_to_disk(\"foo\")\r\n\r\nfoo_db = load_from_disk(\"foo\")\r\ncolum_value = foo_db[column_name]\r\n```\r\n\r\nthe very same happens when you create the dataset, but dont specify the feature type.\r\n\r\ni've tried running this on different envs (macOS, linux) and it's behaving the very same way.",
"When you call `colum_value = foo_db[column_name]`, you load the full column in memory.\r\n\r\nIf you want to avoid filling up your memory, you can access chunks of data instead\r\n```python\r\nembeddings = dataset[i:i + chunk_size][\"embeddings\"]\r\n```",
"@lhoestq yeah that's intentional, i.e. i really want to load the whole column into the memory. but as said above there's an unreasonable amount of overhead for the memory. the np array itself is using about 1G of memory:\r\n```\r\n>>> getsizeof(data)/1024/1024\r\n937.5001525878906\r\n```\r\nthat accessing of column above is using 10x memory compared to the original numpy array.",
"The dataset must be twice as big because we use regular arrow ListArray under the hood and not FixedSizeListArray. Basically we store unnecessary offsets.\r\n\r\nAnd this should affect performance as well. When we developed this, FixedSizeListArray still had some issues but they should be resolved on the PyArrow side now",
"A doubling would be fine. My very basic understanding of PyArrow is that using ListArray is probably related to the issue though. Using a multi-dimensional array in datasets is storing everything as strange nested 1d object arrays, which I imagine is creating the massive overhead.\r\n\r\nI think it should be a PyArrow Tensor, no?",
"PyArrow tensors are not part of the Arrow format AFAIK:\r\n\r\n> There is no direct support in the arrow columnar format to store Tensors as column values.\r\n\r\nsource: https://github.com/apache/arrow/issues/4802#issuecomment-508494694",
"That's... unfortunate. I didn't realize that."
] | 2022-09-10T13:07:22Z
| 2022-09-22T18:29:22Z
| null |
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Describe the bug
When trying to store `Array2D, Array3D, etc` as column values in a dataset, accessing that column (or creating depending on how you create it, see code below) will cause more than 10 fold of memory usage.
## Steps to reproduce the bug
```python
from datasets import Dataset, Features, Array2D, Array3D
import numpy as np
column_name = "a"
array_shape = (64, 64, 3)
data = np.random.random((10000,) + array_shape)
dataset = Dataset.from_dict({column_name: data}, features=Features({column_name: Array3D(shape=array_shape, dtype="float64")}))
```
the code above will use about 10Gb of RAM while constructing the `dataset` object.
The code below will use roughly the same amount of memory (and time) when trying to actually access the data itself of that column.
```python
from datasets import Dataset
import numpy as np
column_name = "a"
array_shape = (64, 64, 3)
data = np.random.random((10000,) + array_shape)
dataset = Dataset.from_dict({column_name: data})
dataset[column_name]
```
## Expected results
Some memory overhead, but not like as it is now and certainly not an overhead of such runtime that is currently happening.
## Actual results
Enormous memory- and runtime overhead.
## Environment info
- `datasets` version: 2.3.2
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4964/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4964/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/4548
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4548/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4548/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4548/events
|
https://github.com/huggingface/datasets/issues/4548
| 1,282,218,096
|
I_kwDODunzps5MbRhw
| 4,548
|
Metadata.jsonl for Imagefolder is ignored if it's in a parent directory to the splits directories/do not have "{split}_" prefix
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
] | null |
[
"I agree it would be nice to support this. It doesn't fit really well in the current data_files.py, where files of each splits are separated in different folder though, maybe we have to modify a bit the logic here. \r\n\r\nOne idea would be to extend `get_patterns_in_dataset_repository` and `get_patterns_locally` to additionally check for `metadata.json`, but feel free to comment if you have better ideas (I feel like we're reaching the limits of what the current implementation IMO, so we could think of a different way of resolving the data files if necessary)"
] | 2022-06-23T10:58:57Z
| 2022-06-30T10:15:32Z
| 2022-06-30T10:15:32Z
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
If data contains a single `metadata.jsonl` file for several splits, it won't be included in a dataset's `data_files` and therefore ignored.
This happens when a directory is structured like as follows:
```
train/
file_1.jpg
file_2.jpg
test/
file_3.jpg
file_4.jpg
metadata.jsonl
```
or like as follows:
```
train_file_1.jpg
train_file_2.jpg
test_file_3.jpg
test_file_4.jpg
metadata.jsonl
```
The same for HF repos.
because it's ignored by the patterns [here](https://github.com/huggingface/datasets/blob/master/src/datasets/data_files.py#L29)
@lhoestq @mariosasko Do you think it's better to add this functionality in `data_files.py` or just specifically in imagefolder/audiofolder code? In `data_files.py` would me more general but I don't know if there are any other cases when that might be needed.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4548/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4548/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7124
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7124/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7124/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7124/events
|
https://github.com/huggingface/datasets/pull/7124
| 2,485,890,442
|
PR_kwDODunzps55YzWr
| 7,124
|
Test get_dataset_config_info with non-existing/gated/private dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7124). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005339 / 0.011353 (-0.006014) | 0.003640 / 0.011008 (-0.007368) | 0.064012 / 0.038508 (0.025504) | 0.030424 / 0.023109 (0.007314) | 0.239966 / 0.275898 (-0.035932) | 0.264361 / 0.323480 (-0.059119) | 0.004247 / 0.007986 (-0.003739) | 0.002847 / 0.004328 (-0.001481) | 0.049640 / 0.004250 (0.045390) | 0.044903 / 0.037052 (0.007851) | 0.250174 / 0.258489 (-0.008315) | 0.281423 / 0.293841 (-0.012418) | 0.029419 / 0.128546 (-0.099127) | 0.012221 / 0.075646 (-0.063426) | 0.205907 / 0.419271 (-0.213365) | 0.036654 / 0.043533 (-0.006878) | 0.245805 / 0.255139 (-0.009334) | 0.265029 / 0.283200 (-0.018170) | 0.018081 / 0.141683 (-0.123602) | 1.113831 / 1.452155 (-0.338324) | 1.156443 / 1.492716 (-0.336274) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.134389 / 0.018006 (0.116383) | 0.300637 / 0.000490 (0.300147) | 0.000240 / 0.000200 (0.000040) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019111 / 0.037411 (-0.018300) | 0.062585 / 0.014526 (0.048059) | 0.075909 / 0.176557 (-0.100647) | 0.121382 / 0.737135 (-0.615753) | 0.074980 / 0.296338 (-0.221359) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285062 / 0.215209 (0.069853) | 2.850130 / 2.077655 (0.772476) | 1.519877 / 1.504120 (0.015757) | 1.388711 / 1.541195 (-0.152484) | 1.397284 / 1.468490 (-0.071206) | 0.723100 / 4.584777 (-3.861677) | 2.393184 / 3.745712 (-1.352529) | 2.908418 / 5.269862 (-2.361443) | 1.871024 / 4.565676 (-2.694653) | 0.078230 / 0.424275 (-0.346045) | 0.005158 / 0.007607 (-0.002449) | 0.345622 / 0.226044 (0.119577) | 3.357611 / 2.268929 (1.088683) | 1.844492 / 55.444624 (-53.600132) | 1.584237 / 6.876477 (-5.292240) | 1.577158 / 2.142072 (-0.564915) | 0.789702 / 4.805227 (-4.015525) | 0.132045 / 6.500664 (-6.368619) | 0.042304 / 0.075469 (-0.033165) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977166 / 1.841788 (-0.864622) | 11.306118 / 8.074308 (3.231810) | 9.490778 / 10.191392 (-0.700614) | 0.143536 / 0.680424 (-0.536888) | 0.015304 / 0.534201 (-0.518897) | 0.313892 / 0.579283 (-0.265391) | 0.267009 / 0.434364 (-0.167355) | 0.345560 / 0.540337 (-0.194778) | 0.435649 / 1.386936 (-0.951287) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005700 / 0.011353 (-0.005653) | 0.003490 / 0.011008 (-0.007519) | 0.049990 / 0.038508 (0.011482) | 0.032070 / 0.023109 (0.008961) | 0.272622 / 0.275898 (-0.003276) | 0.298265 / 0.323480 (-0.025215) | 0.004379 / 0.007986 (-0.003606) | 0.002786 / 0.004328 (-0.001543) | 0.048271 / 0.004250 (0.044020) | 0.040102 / 0.037052 (0.003050) | 0.286433 / 0.258489 (0.027944) | 0.319306 / 0.293841 (0.025465) | 0.032872 / 0.128546 (-0.095675) | 0.011870 / 0.075646 (-0.063776) | 0.059886 / 0.419271 (-0.359385) | 0.034281 / 0.043533 (-0.009252) | 0.275588 / 0.255139 (0.020450) | 0.292951 / 0.283200 (0.009751) | 0.018095 / 0.141683 (-0.123588) | 1.130870 / 1.452155 (-0.321285) | 1.190761 / 1.492716 (-0.301955) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093346 / 0.018006 (0.075340) | 0.307506 / 0.000490 (0.307016) | 0.000214 / 0.000200 (0.000014) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022873 / 0.037411 (-0.014538) | 0.077070 / 0.014526 (0.062544) | 0.089152 / 0.176557 (-0.087404) | 0.130186 / 0.737135 (-0.606949) | 0.090244 / 0.296338 (-0.206095) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297950 / 0.215209 (0.082740) | 2.942360 / 2.077655 (0.864705) | 1.614324 / 1.504120 (0.110204) | 1.495795 / 1.541195 (-0.045400) | 1.506155 / 1.468490 (0.037665) | 0.730307 / 4.584777 (-3.854470) | 0.966312 / 3.745712 (-2.779400) | 2.928955 / 5.269862 (-2.340906) | 1.940049 / 4.565676 (-2.625627) | 0.079589 / 0.424275 (-0.344686) | 0.006004 / 0.007607 (-0.001604) | 0.356630 / 0.226044 (0.130585) | 3.516652 / 2.268929 (1.247724) | 1.963196 / 55.444624 (-53.481429) | 1.674489 / 6.876477 (-5.201988) | 1.677558 / 2.142072 (-0.464514) | 0.806447 / 4.805227 (-3.998780) | 0.133819 / 6.500664 (-6.366845) | 0.040762 / 0.075469 (-0.034707) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.038495 / 1.841788 (-0.803293) | 11.829186 / 8.074308 (3.754878) | 10.214158 / 10.191392 (0.022766) | 0.140590 / 0.680424 (-0.539834) | 0.014729 / 0.534201 (-0.519472) | 0.300557 / 0.579283 (-0.278726) | 0.122772 / 0.434364 (-0.311592) | 0.344618 / 0.540337 (-0.195720) | 0.460064 / 1.386936 (-0.926872) |\n\n</details>\n</details>\n\n\n"
] | 2024-08-26T04:53:59Z
| 2024-08-26T06:15:33Z
| 2024-08-26T06:09:42Z
|
MEMBER
| null | null | null |
Test get_dataset_config_info with non-existing/gated/private dataset.
Related to:
- #7109
See also:
- https://github.com/huggingface/dataset-viewer/pull/3037: https://github.com/huggingface/dataset-viewer/pull/3037/commits/bb1a7e00c53c242088597cab6572e4fd57797ecb
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7124/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7124/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7124.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7124",
"merged_at": "2024-08-26T06:09:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7124.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7124"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5446
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5446/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5446/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5446/events
|
https://github.com/huggingface/datasets/pull/5446
| 1,550,591,588
|
PR_kwDODunzps5IMyka
| 5,446
|
test v0.12.0.rc0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Wauplin",
"id": 11801849,
"login": "Wauplin",
"node_id": "MDQ6VXNlcjExODAxODQ5",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Wauplin",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@Wauplin I was testing it in a dedicated branch without opening a PR: https://github.com/huggingface/datasets/commits/test-hfh-0.12.0rc0",
"Oops, sorry @albertvillanova. I thought for next time I'll start the CIs before pinging everyone.\r\nI'm closing this one.",
"@Wauplin in your Slack message, you asked people from every major dependent library to check that our CI work. That is why I am checking it... :)\r\n\r\nAlso, I think for this purpose it is better to test it in a dedicated branch, rather than opening and closing a PR.",
"Yes, yes I know. Completely my fault on this one"
] | 2023-01-20T10:05:19Z
| 2023-01-20T10:43:22Z
| 2023-01-20T10:13:48Z
|
CONTRIBUTOR
| null | null | null |
DO NOT MERGE.
Only to test the CI.
cc @lhoestq @albertvillanova
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Wauplin",
"id": 11801849,
"login": "Wauplin",
"node_id": "MDQ6VXNlcjExODAxODQ5",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Wauplin",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5446/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5446/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5446.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5446",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5446.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5446"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5257
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5257/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5257/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5257/events
|
https://github.com/huggingface/datasets/pull/5257
| 1,452,656,891
|
PR_kwDODunzps5DFENm
| 5,257
|
remove an unused statement
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7569098?v=4",
"events_url": "https://api.github.com/users/WrRan/events{/privacy}",
"followers_url": "https://api.github.com/users/WrRan/followers",
"following_url": "https://api.github.com/users/WrRan/following{/other_user}",
"gists_url": "https://api.github.com/users/WrRan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/WrRan",
"id": 7569098,
"login": "WrRan",
"node_id": "MDQ6VXNlcjc1NjkwOTg=",
"organizations_url": "https://api.github.com/users/WrRan/orgs",
"received_events_url": "https://api.github.com/users/WrRan/received_events",
"repos_url": "https://api.github.com/users/WrRan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/WrRan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WrRan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/WrRan",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-11-17T04:00:50Z
| 2022-11-18T11:04:08Z
| 2022-11-18T11:04:08Z
|
CONTRIBUTOR
| null | null | null |
remove the unused statement: `input_pairs = list(zip())`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5257/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5257/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5257.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5257",
"merged_at": "2022-11-18T11:04:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5257.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5257"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5477
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5477/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5477/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5477/events
|
https://github.com/huggingface/datasets/issues/5477
| 1,559,909,892
|
I_kwDODunzps5c-lYE
| 5,477
|
Unpin sqlalchemy once issue is fixed
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@albertvillanova It looks like that issue has been fixed so I made a PR to unpin sqlalchemy! ",
"The source issue:\r\n- https://github.com/pandas-dev/pandas/issues/40686\r\n\r\nhas been fixed:\r\n- https://github.com/pandas-dev/pandas/pull/48576\r\n\r\nThe fix was released yesterday (2023-04-03) only in `pandas-2.0.0`:\r\n- https://github.com/pandas-dev/pandas/releases/tag/v2.0.0\r\n\r\nbut it will not be back-ported to `pandas-1`:\r\n- https://github.com/pandas-dev/pandas/pull/48576#issuecomment-1466467159\r\n\r\nAlso note that `pandas-2.0.0` dropped support for Python 3.7:\r\n- https://github.com/pandas-dev/pandas/issues/41678\r\n- https://github.com/pandas-dev/pandas/pull/41989\r\n\r\nTherefore, we cannot unpin `sqlalchemy` until we drop support for Python 3.7 (these Python users cannot use `pandas-2`)."
] | 2023-01-27T15:01:55Z
| 2024-01-26T14:50:45Z
| 2024-01-26T14:50:45Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Once the source issue is fixed:
- pandas-dev/pandas#51015
we should revert the pin introduced in:
- #5476
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5477/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5477/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5391
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5391/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5391/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5391/events
|
https://github.com/huggingface/datasets/issues/5391
| 1,510,350,400
|
I_kwDODunzps5aBh5A
| 5,391
|
Whisper Event - RuntimeError: The size of tensor a (504) must match the size of tensor b (448) at non-singleton dimension 1 100% 1000/1000 [2:52:21<00:00, 10.34s/it]
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12885107?v=4",
"events_url": "https://api.github.com/users/catswithbats/events{/privacy}",
"followers_url": "https://api.github.com/users/catswithbats/followers",
"following_url": "https://api.github.com/users/catswithbats/following{/other_user}",
"gists_url": "https://api.github.com/users/catswithbats/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/catswithbats",
"id": 12885107,
"login": "catswithbats",
"node_id": "MDQ6VXNlcjEyODg1MTA3",
"organizations_url": "https://api.github.com/users/catswithbats/orgs",
"received_events_url": "https://api.github.com/users/catswithbats/received_events",
"repos_url": "https://api.github.com/users/catswithbats/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/catswithbats/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/catswithbats/subscriptions",
"type": "User",
"url": "https://api.github.com/users/catswithbats",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hey @catswithbats! Super sorry for the late reply! This is happening because there is data with label length (504) that exceeds the model's max length (448). \r\n\r\nThere are two options here:\r\n1. Increase the model's `max_length` parameter: \r\n```python\r\nmodel.config.max_length = 512\r\n```\r\n2. Filter data with labels longer than max length: https://discuss.huggingface.co/t/open-to-the-community-whisper-fine-tuning-event/26681/21?u=sanchit-gandhi\r\n\r\nNote that the datasets repo is reserved for issues directly related to the HF datasets library. Issues related to custom fine-tuning implementations are more applicable to the HF Forum: https://discuss.huggingface.co. You're more likely to get a response by posting your issue in the most applicable place and boost the chance of someone sharing a working solution!",
"@sanchit-gandhi Thank you for all your work on this topic.\r\n\r\nI'm finding that changing the `max_length` value does not make this error go away."
] | 2022-12-25T15:17:14Z
| 2023-07-21T14:29:47Z
| 2023-07-21T14:29:47Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Done in a VM with a GPU (Ubuntu) following the [Whisper Event - PYTHON](https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event#python-script) instructions.
Attempted using [RuntimeError: he size of tensor a (504) must match the size of tensor b (448) at non-singleton dimension 1 100% 1000/1000 - WEB](https://discuss.huggingface.co/t/trainer-runtimeerror-the-size-of-tensor-a-462-must-match-the-size-of-tensor-b-448-at-non-singleton-dimension-1/26010/10 ) - another person experiencing the same issue. But could not resolve the issue with the google/fleurs data. __Not clear what can be modified in the PY code to resolve the input data size mismatch, as the training data is already very small__.
Tried posting on Discord, @sanchit-gandhi and @vaibhavs10. Was hoping that the event is over and some input/help is now available. [Hugging Face - whisper-small-amet](https://huggingface.co/drmeeseeks/whisper-small-amet).
The paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356) am_et is a low resource language (Table E), with the WER results ranging from 120-229, based on model size. (Whisper small WER=120.2).
# ---> Initial Training Output
/usr/local/lib/python3.8/dist-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
warnings.warn(
[INFO|trainer.py:1641] 2022-12-18 05:23:28,799 >> ***** Running training *****
[INFO|trainer.py:1642] 2022-12-18 05:23:28,799 >> Num examples = 446
[INFO|trainer.py:1643] 2022-12-18 05:23:28,799 >> Num Epochs = 72
[INFO|trainer.py:1644] 2022-12-18 05:23:28,799 >> Instantaneous batch size per device = 16
[INFO|trainer.py:1645] 2022-12-18 05:23:28,799 >> Total train batch size (w. parallel, distributed & accumulation) = 32
[INFO|trainer.py:1646] 2022-12-18 05:23:28,799 >> Gradient Accumulation steps = 2
[INFO|trainer.py:1647] 2022-12-18 05:23:28,800 >> Total optimization steps = 1000
[INFO|trainer.py:1648] 2022-12-18 05:23:28,801 >> Number of trainable parameters = 241734912
# ---> Error
14% 9/65 [07:07<48:34, 52.04s/it][INFO|configuration_utils.py:523] 2022-12-18 05:03:07,941 >> Generate config GenerationConfig {
"begin_suppress_tokens": [
220,
50257
],
"bos_token_id": 50257,
"decoder_start_token_id": 50258,
"eos_token_id": 50257,
"max_length": 448,
"pad_token_id": 50257,
"transformers_version": "4.26.0.dev0",
"use_cache": false
}
Traceback (most recent call last):
File "run_speech_recognition_seq2seq_streaming.py", line 629, in <module>
main()
File "run_speech_recognition_seq2seq_streaming.py", line 578, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1534, in train
return inner_training_loop(
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1859, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2122, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer_seq2seq.py", line 78, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2818, in evaluate
output = eval_loop(
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 3000, in evaluation_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer_seq2seq.py", line 213, in prediction_step
outputs = model(**inputs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/whisper/modeling_whisper.py", line 1197, in forward
outputs = self.model(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/whisper/modeling_whisper.py", line 1066, in forward
decoder_outputs = self.decoder(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/whisper/modeling_whisper.py", line 873, in forward
hidden_states = inputs_embeds + positions
RuntimeError: The size of tensor a (504) must match the size of tensor b (448) at non-singleton dimension 1
100% 1000/1000 [2:52:21<00:00, 10.34s/it]
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5391/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5391/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4670
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4670/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4670/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4670/events
|
https://github.com/huggingface/datasets/issues/4670
| 1,299,984,246
|
I_kwDODunzps5NfC92
| 4,670
|
Can't extract files from `.7z` zipfile using `download_and_extract`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Hi @bhavitvyamalik, thanks for reporting.\r\n\r\nYes, currently we do not support 7zip archive compression: I think we should.\r\n\r\nAs a workaround, you could uncompress it explicitly, like done in e.g. `samsum` dataset: \r\n\r\nhttps://github.com/huggingface/datasets/blob/fedf891a08bfc77041d575fad6c26091bc0fce52/datasets/samsum/samsum.py#L106-L110\r\n",
"Related to this issue: https://github.com/huggingface/datasets/issues/3541",
"Sure, let me look into and check what can be done. Will keep you guys updated here!",
"Initially, I thought of solving this without any external dependency. Almost everywhere I saw `lzma` can be used for this but there is a caveat that lzma doesn’t work with 7z archives but only single files. In my case the 7z archive has multiple files so it didn't work. Is it fine to use external library here?",
"Hi @bhavitvyamalik, thanks for your investigation.\r\n\r\nOn Monday, I started a PR that will eventually close this issue as well: I'm linking it to this.\r\n- #4672\r\n\r\nLet me know what you think. "
] | 2022-07-10T18:16:49Z
| 2022-07-15T13:02:07Z
| 2022-07-15T13:02:07Z
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Describe the bug
I'm adding a new dataset which is a `.7z` zip file in Google drive and contains 3 json files inside. I'm able to download the data files using `download_and_extract` but after downloading it throws this error:
```
>>> dataset = load_dataset("./datasets/mantis/")
Using custom data configuration default
Downloading and preparing dataset mantis/default to /Users/bhavitvyamalik/.cache/huggingface/datasets/mantis/default/1.1.0/611affa804ec53e2055a335cc1b8b213bb5a0b5142d919967729d5ee23c6bab4...
Downloading data: 100%|█████████████████████████████████████████████████████████| 77.2M/77.2M [00:23<00:00, 3.28MB/s]
/Users/bhavitvyamalik/.cache/huggingface/datasets/downloads/fc3d70123c9de8407587a59aa426c37819cf2bf016795d33270e8a1d558a34e6
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/bhavitvyamalik/Desktop/work/hf/datasets/src/datasets/load.py", line 1745, in load_dataset
use_auth_token=use_auth_token,
File "/Users/bhavitvyamalik/Desktop/work/hf/datasets/src/datasets/builder.py", line 595, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/Users/bhavitvyamalik/Desktop/work/hf/datasets/src/datasets/builder.py", line 690, in _download_and_prepare
) from None
OSError: Cannot find data file.
Original error:
[Errno 20] Not a directory: '/Users/bhavitvyamalik/.cache/huggingface/datasets/downloads/fc3d70123c9de8407587a59aa426c37819cf2bf016795d33270e8a1d558a34e6/merged_train.json'
```
just before generating the splits. I checked `fc3d70123c9de8407587a59aa426c37819cf2bf016795d33270e8a1d558a34e6` file and it's `7z` zip file (similar to downloaded Google drive file) which means it didn't get unzip. Do I need to unzip it separately and then pass the paths for train,dev,test files in `SplitGenerator`?
## Environment info
- `datasets` version: 1.18.4.dev0
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.8
- PyArrow version: 5.0.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4670/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4670/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5833
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5833/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5833/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5833/events
|
https://github.com/huggingface/datasets/issues/5833
| 1,702,280,682
|
I_kwDODunzps5ldr3q
| 5,833
|
Unable to push dataset - `create_pr` problem
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17645711?v=4",
"events_url": "https://api.github.com/users/agombert/events{/privacy}",
"followers_url": "https://api.github.com/users/agombert/followers",
"following_url": "https://api.github.com/users/agombert/following{/other_user}",
"gists_url": "https://api.github.com/users/agombert/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/agombert",
"id": 17645711,
"login": "agombert",
"node_id": "MDQ6VXNlcjE3NjQ1NzEx",
"organizations_url": "https://api.github.com/users/agombert/orgs",
"received_events_url": "https://api.github.com/users/agombert/received_events",
"repos_url": "https://api.github.com/users/agombert/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/agombert/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agombert/subscriptions",
"type": "User",
"url": "https://api.github.com/users/agombert",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"Thanks for reporting, @agombert.\r\n\r\nIn this case, I think the root issue is authentication: before pushing to Hub, you should authenticate. See our docs: https://huggingface.co/docs/datasets/upload_dataset#upload-with-python\r\n> 2. To upload a dataset on the Hub in Python, you need to log in to your Hugging Face account:\r\n ```\r\n huggingface-cli login\r\n ```",
"Hey @albertvillanova well I actually did :D \r\n\r\n<img width=\"1079\" alt=\"Capture d’écran 2023-05-09 à 18 02 58\" src=\"https://github.com/huggingface/datasets/assets/17645711/e091aa20-06b1-4dd3-bfdb-35e832c66f8d\">\r\n",
"That is weird that you get a Forbidden error if you are properly authenticated...\r\n\r\nToday we had a big outage issue affecting the Hugging Face Hub. Could you please retry to push_to_hub your dataset? Maybe that was the cause...",
"Yes I've just tried again and same error 403 :/",
"Login successful but also got this error \"Forbidden: pass `create_pr=1` as a query parameter to create a Pull Request\"",
"Make sure your API token has a `write` role. I had the same issue as you with the `read` token. Creating a `write` token and using that solved the issue.",
"> Make sure your API token has a `write` role. I had the same issue as you with the `read` token. Creating a `write` token and using that solved the issue.\r\n\r\nI generate a token with write role. It works! thank you so much.",
"@dmitrijsk amazing thanks so much ! \r\nThe error should be clearer when the token is read-only – I wasted a lot of time there..",
"Based on the number of reactions (https://github.com/huggingface/datasets/issues/5833#issuecomment-1586521001), many users have issues debugging this. @Wauplin Maybe a more informative error can be thrown in `hfh` if a token's role is insufficient for an op. WDYT?",
"Yes indeed. I created an issue some time ago about it: https://github.com/huggingface/huggingface_hub/issues/1653. I'll prioritize it more and let you know. Thanks for the ping.",
"Hey everyone :wave: The error message has been fixed to be more informative. As mentioned in https://github.com/huggingface/datasets/issues/5833#issuecomment-1586521001, the n°1 reason why this is happening is that a `read` token has been used instead of `write`. The fix has being shipped on the server meaning that you don't need to update any client library. The new error message looks like this: \r\n\r\n```\r\nhuggingface_hub.utils._errors.HfHubHTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/models/Wauplin/test_recovered/preupload/main (Request ID: Root=1-6532752e-1ee492b070d9e3020e68bddc;25ca3387-44bc-433e-b49f-6e290305ed10)\r\n\r\nForbidden: you must use a write token to upload to a repository.\r\n```\r\n\r\n---\r\n\r\ncc @mariosasko I let you close this issue if you feel it's completely solved",
"@Wauplin Reopening it. Indeed, the above error message is thrown if pushing to an **existing** repo with a `read` token. However, if the repo does not exist and needs to be created (by calling `create_repo` in `push_to_hub`), then passing a `read` token will raise the following:\r\n```python\r\nHTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/repos/create\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nHfHubHTTPError Traceback (most recent call last)\r\n[/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py](https://localhost:8080/#) in hf_raise_for_status(response, endpoint_name)\r\n 318 # Convert `HTTPError` into a `HfHubHTTPError` to display request information\r\n 319 # as well (request id and/or server error message)\r\n--> 320 raise HfHubHTTPError(str(e), response=response) from e\r\n 321 \r\n 322 \r\n\r\nHfHubHTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/repos/create (Request ID: Root=1-653291a1-0e9cc2b16c049ff510834d47;7edeb8a3-4bf1-4d9b-8f7f-06d430da9ee7)\r\n\r\nYou don't have the rights to create a dataset under this namespace\r\n```\r\n\r\nI think this error message should also be more informative!",
"> However, if the repo does not exist and needs to be created (by calling create_repo in push_to_hub), then passing a read token will raise the following:\r\n\r\nThis seems to be a different issue than the one reported above right? Still agree that a more informative message would be nice. Can you open an issue on moon-landing for it please? (not sure I can open a PR myself for this one :grimacing: )",
"@Wauplin Done :)"
] | 2023-05-09T15:32:55Z
| 2023-10-24T18:22:29Z
| 2023-10-24T18:22:29Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I can't upload to the hub the dataset I manually created locally (Image dataset). I have a problem when using the method `.push_to_hub` which asks for a `create_pr` attribute which is not compatible.
### Steps to reproduce the bug
here what I have:
```python
dataset.push_to_hub("agomberto/FrenchCensus-handwritten-texts")
```
Output:
```python
Pushing split train to the Hub.
Pushing dataset shards to the dataset hub: 0%| | 0/2 [00:00<?, ?it/s]
Creating parquet from Arrow format: 0%| | 0/3 [00:00<?, ?ba/s]
Creating parquet from Arrow format: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 12.70ba/s]
Pushing dataset shards to the dataset hub: 0%| | 0/2 [00:01<?, ?it/s]
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py:259, in hf_raise_for_status(response, endpoint_name)
258 try:
--> 259 response.raise_for_status()
260 except HTTPError as e:
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/requests/models.py:1021, in Response.raise_for_status(self)
1020 if http_error_msg:
-> 1021 raise HTTPError(http_error_msg, response=self)
HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/agomberto/FrenchCensus-handwritten-texts/commit/main
The above exception was the direct cause of the following exception:
HfHubHTTPError Traceback (most recent call last)
Cell In[7], line 1
----> 1 dataset.push_to_hub("agomberto/FrenchCensus-handwritten-texts")
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/datasets/dataset_dict.py:1583, in DatasetDict.push_to_hub(self, repo_id, private, token, branch, max_shard_size, num_shards, embed_external_files)
1581 logger.warning(f"Pushing split {split} to the Hub.")
1582 # The split=key needs to be removed before merging
-> 1583 repo_id, split, uploaded_size, dataset_nbytes, _, _ = self[split]._push_parquet_shards_to_hub(
1584 repo_id,
1585 split=split,
1586 private=private,
1587 token=token,
1588 branch=branch,
1589 max_shard_size=max_shard_size,
1590 num_shards=num_shards.get(split),
1591 embed_external_files=embed_external_files,
1592 )
1593 total_uploaded_size += uploaded_size
1594 total_dataset_nbytes += dataset_nbytes
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/datasets/arrow_dataset.py:5275, in Dataset._push_parquet_shards_to_hub(self, repo_id, split, private, token, branch, max_shard_size, num_shards, embed_external_files)
5273 shard.to_parquet(buffer)
5274 uploaded_size += buffer.tell()
-> 5275 _retry(
5276 api.upload_file,
5277 func_kwargs={
5278 "path_or_fileobj": buffer.getvalue(),
5279 "path_in_repo": shard_path_in_repo,
5280 "repo_id": repo_id,
5281 "token": token,
5282 "repo_type": "dataset",
5283 "revision": branch,
5284 },
5285 exceptions=HTTPError,
5286 status_codes=[504],
5287 base_wait_time=2.0,
5288 max_retries=5,
5289 max_wait_time=20.0,
5290 )
5291 shards_path_in_repo.append(shard_path_in_repo)
5293 # Cleanup to remove unused files
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/datasets/utils/file_utils.py:285, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)
283 except exceptions as err:
284 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):
--> 285 raise err
286 else:
287 sleep_time = min(max_wait_time, base_wait_time * 2**retry) # Exponential backoff
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/datasets/utils/file_utils.py:282, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)
280 while True:
281 try:
--> 282 return func(*func_args, **func_kwargs)
283 except exceptions as err:
284 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py:120, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
117 if check_use_auth_token:
118 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
--> 120 return fn(*args, **kwargs)
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/hf_api.py:2998, in HfApi.upload_file(self, path_or_fileobj, path_in_repo, repo_id, token, repo_type, revision, commit_message, commit_description, create_pr, parent_commit)
2990 commit_message = (
2991 commit_message if commit_message is not None else f"Upload {path_in_repo} with huggingface_hub"
2992 )
2993 operation = CommitOperationAdd(
2994 path_or_fileobj=path_or_fileobj,
2995 path_in_repo=path_in_repo,
2996 )
-> 2998 commit_info = self.create_commit(
2999 repo_id=repo_id,
3000 repo_type=repo_type,
3001 operations=[operation],
3002 commit_message=commit_message,
3003 commit_description=commit_description,
3004 token=token,
3005 revision=revision,
3006 create_pr=create_pr,
3007 parent_commit=parent_commit,
3008 )
3010 if commit_info.pr_url is not None:
3011 revision = quote(_parse_revision_from_pr_url(commit_info.pr_url), safe="")
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py:120, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
117 if check_use_auth_token:
118 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
--> 120 return fn(*args, **kwargs)
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/hf_api.py:2548, in HfApi.create_commit(self, repo_id, operations, commit_message, commit_description, token, repo_type, revision, create_pr, num_threads, parent_commit)
2546 try:
2547 commit_resp = get_session().post(url=commit_url, headers=headers, data=data, params=params)
-> 2548 hf_raise_for_status(commit_resp, endpoint_name="commit")
2549 except RepositoryNotFoundError as e:
2550 e.append_to_message(_CREATE_COMMIT_NO_REPO_ERROR_MESSAGE)
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py:301, in hf_raise_for_status(response, endpoint_name)
297 raise BadRequestError(message, response=response) from e
299 # Convert `HTTPError` into a `HfHubHTTPError` to display request information
300 # as well (request id and/or server error message)
--> 301 raise HfHubHTTPError(str(e), response=response) from e
HfHubHTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/agomberto/FrenchCensus-handwritten-texts/commit/main (Request ID: Root=1-645a66bf-255ad91602a6404e6cb70fba)
Forbidden: pass `create_pr=1` as a query parameter to create a Pull Request
```
And then when I do
```python
dataset.push_to_hub("agomberto/FrenchCensus-handwritten-texts", create_pr=1)
```
I get
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[8], line 1
----> 1 dataset.push_to_hub("agomberto/FrenchCensus-handwritten-texts", create_pr=1)
TypeError: push_to_hub() got an unexpected keyword argument 'create_pr'
```
### Expected behavior
I would like to have the dataset updloaded [here](https://huggingface.co/datasets/agomberto/FrenchCensus-handwritten-texts).
### Environment info
```bash
- `datasets` version: 2.12.0
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.8.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 1.5.3
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5833/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5833/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6806
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6806/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6806/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6806/events
|
https://github.com/huggingface/datasets/pull/6806
| 2,239,435,074
|
PR_kwDODunzps5sc8Mb
| 6,806
|
Fix hf-internal-testing/dataset_with_script commit SHA in CI test
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6806). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005068 / 0.011353 (-0.006285) | 0.003613 / 0.011008 (-0.007395) | 0.063226 / 0.038508 (0.024718) | 0.030653 / 0.023109 (0.007544) | 0.243981 / 0.275898 (-0.031918) | 0.268596 / 0.323480 (-0.054884) | 0.003109 / 0.007986 (-0.004876) | 0.003292 / 0.004328 (-0.001036) | 0.048857 / 0.004250 (0.044606) | 0.043929 / 0.037052 (0.006876) | 0.264002 / 0.258489 (0.005513) | 0.289028 / 0.293841 (-0.004813) | 0.028053 / 0.128546 (-0.100493) | 0.010837 / 0.075646 (-0.064809) | 0.208084 / 0.419271 (-0.211188) | 0.035592 / 0.043533 (-0.007941) | 0.252639 / 0.255139 (-0.002500) | 0.267599 / 0.283200 (-0.015600) | 0.018097 / 0.141683 (-0.123585) | 1.150811 / 1.452155 (-0.301344) | 1.219449 / 1.492716 (-0.273267) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095427 / 0.018006 (0.077421) | 0.307270 / 0.000490 (0.306781) | 0.000218 / 0.000200 (0.000018) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018713 / 0.037411 (-0.018698) | 0.065238 / 0.014526 (0.050712) | 0.074650 / 0.176557 (-0.101906) | 0.120130 / 0.737135 (-0.617005) | 0.078457 / 0.296338 (-0.217882) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283666 / 0.215209 (0.068457) | 2.852818 / 2.077655 (0.775163) | 1.459790 / 1.504120 (-0.044330) | 1.326732 / 1.541195 (-0.214463) | 1.373530 / 1.468490 (-0.094960) | 0.579136 / 4.584777 (-4.005641) | 2.388369 / 3.745712 (-1.357343) | 2.813786 / 5.269862 (-2.456075) | 1.730079 / 4.565676 (-2.835597) | 0.063445 / 0.424275 (-0.360831) | 0.005355 / 0.007607 (-0.002252) | 0.340169 / 0.226044 (0.114124) | 3.391220 / 2.268929 (1.122291) | 1.838003 / 55.444624 (-53.606621) | 1.523518 / 6.876477 (-5.352959) | 1.574007 / 2.142072 (-0.568065) | 0.650265 / 4.805227 (-4.154962) | 0.117114 / 6.500664 (-6.383550) | 0.042430 / 0.075469 (-0.033039) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.955596 / 1.841788 (-0.886191) | 11.546544 / 8.074308 (3.472236) | 9.593613 / 10.191392 (-0.597779) | 0.141502 / 0.680424 (-0.538922) | 0.014251 / 0.534201 (-0.519950) | 0.293825 / 0.579283 (-0.285458) | 0.263088 / 0.434364 (-0.171276) | 0.325035 / 0.540337 (-0.215302) | 0.419372 / 1.386936 (-0.967564) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005567 / 0.011353 (-0.005785) | 0.003670 / 0.011008 (-0.007338) | 0.050338 / 0.038508 (0.011830) | 0.031730 / 0.023109 (0.008621) | 0.278307 / 0.275898 (0.002409) | 0.303170 / 0.323480 (-0.020310) | 0.004276 / 0.007986 (-0.003709) | 0.002720 / 0.004328 (-0.001609) | 0.048675 / 0.004250 (0.044425) | 0.041026 / 0.037052 (0.003974) | 0.291353 / 0.258489 (0.032864) | 0.318487 / 0.293841 (0.024646) | 0.029676 / 0.128546 (-0.098870) | 0.010428 / 0.075646 (-0.065218) | 0.057443 / 0.419271 (-0.361828) | 0.032735 / 0.043533 (-0.010798) | 0.282900 / 0.255139 (0.027761) | 0.297539 / 0.283200 (0.014339) | 0.018237 / 0.141683 (-0.123446) | 1.188047 / 1.452155 (-0.264107) | 1.223283 / 1.492716 (-0.269433) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090629 / 0.018006 (0.072623) | 0.300898 / 0.000490 (0.300408) | 0.000212 / 0.000200 (0.000012) | 0.000133 / 0.000054 (0.000078) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022200 / 0.037411 (-0.015211) | 0.075310 / 0.014526 (0.060784) | 0.086790 / 0.176557 (-0.089766) | 0.127392 / 0.737135 (-0.609744) | 0.088435 / 0.296338 (-0.207903) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301308 / 0.215209 (0.086099) | 2.963126 / 2.077655 (0.885471) | 1.639604 / 1.504120 (0.135484) | 1.508776 / 1.541195 (-0.032419) | 1.553280 / 1.468490 (0.084789) | 0.567256 / 4.584777 (-4.017520) | 2.445231 / 3.745712 (-1.300482) | 2.884071 / 5.269862 (-2.385791) | 1.777321 / 4.565676 (-2.788355) | 0.063659 / 0.424275 (-0.360616) | 0.005435 / 0.007607 (-0.002172) | 0.361786 / 0.226044 (0.135742) | 3.624264 / 2.268929 (1.355335) | 2.022661 / 55.444624 (-53.421963) | 1.740581 / 6.876477 (-5.135896) | 1.748503 / 2.142072 (-0.393570) | 0.660783 / 4.805227 (-4.144444) | 0.118045 / 6.500664 (-6.382619) | 0.040940 / 0.075469 (-0.034529) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.015614 / 1.841788 (-0.826174) | 12.094985 / 8.074308 (4.020677) | 10.435581 / 10.191392 (0.244189) | 0.140239 / 0.680424 (-0.540185) | 0.014992 / 0.534201 (-0.519209) | 0.290549 / 0.579283 (-0.288735) | 0.274718 / 0.434364 (-0.159645) | 0.334783 / 0.540337 (-0.205554) | 0.426540 / 1.386936 (-0.960396) |\n\n</details>\n</details>\n\n\n"
] | 2024-04-12T08:47:50Z
| 2024-04-12T09:08:23Z
| 2024-04-12T09:02:12Z
|
MEMBER
| null | null | null |
Fix test using latest commit SHA in hf-internal-testing/dataset_with_script dataset: https://huggingface.co/datasets/hf-internal-testing/dataset_with_script/commits/refs%2Fconvert%2Fparquet
Fix #6796.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6806/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6806/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6806.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6806",
"merged_at": "2024-04-12T09:02:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6806.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6806"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6386
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6386/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6386/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6386/events
|
https://github.com/huggingface/datasets/issues/6386
| 1,979,878,014
|
I_kwDODunzps52Aop-
| 6,386
|
Formatting overhead
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/320321?v=4",
"events_url": "https://api.github.com/users/d-miketa/events{/privacy}",
"followers_url": "https://api.github.com/users/d-miketa/followers",
"following_url": "https://api.github.com/users/d-miketa/following{/other_user}",
"gists_url": "https://api.github.com/users/d-miketa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/d-miketa",
"id": 320321,
"login": "d-miketa",
"node_id": "MDQ6VXNlcjMyMDMyMQ==",
"organizations_url": "https://api.github.com/users/d-miketa/orgs",
"received_events_url": "https://api.github.com/users/d-miketa/received_events",
"repos_url": "https://api.github.com/users/d-miketa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/d-miketa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d-miketa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/d-miketa",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Ah I think the `line-profiler` log is off-by-one and it is in fact the `extract_batch` method that's taking forever. Will investigate further.",
"I tracked it down to a quirk of my setup. Apologies."
] | 2023-11-06T19:06:38Z
| 2023-11-06T23:56:12Z
| 2023-11-06T23:56:12Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Hi! I very recently noticed that my training time is dominated by batch formatting. Using Lightning's profilers, I located the bottleneck within `datasets.formatting.formatting` and then narrowed it down with `line-profiler`. It turns out that almost all of the overhead is due to creating new instances of `self.python_arrow_extractor`. I admit I'm confused why that could be the case - as far as I can tell there's no complex `__init__` logic to execute.

### Steps to reproduce the bug
1. Set up a dataset `ds` with potentially several (4+) columns (not sure if this is necessary, but it did at one point of the investigation make overhead worse)
2. Process it using a custom transform, `ds = ds.with_transform(transform_func)`
3. Decorate this function https://github.com/huggingface/datasets/blob/main/src/datasets/formatting/formatting.py#L512 with `@profile` from https://pypi.org/project/line-profiler/
4. Profile with `$ kernprof -l script_to_profile.py`
### Expected behavior
Batch formatting should have acceptable overhead.
### Environment info
```
datasets=2.14.6
pyarrow=14.0.0
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/320321?v=4",
"events_url": "https://api.github.com/users/d-miketa/events{/privacy}",
"followers_url": "https://api.github.com/users/d-miketa/followers",
"following_url": "https://api.github.com/users/d-miketa/following{/other_user}",
"gists_url": "https://api.github.com/users/d-miketa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/d-miketa",
"id": 320321,
"login": "d-miketa",
"node_id": "MDQ6VXNlcjMyMDMyMQ==",
"organizations_url": "https://api.github.com/users/d-miketa/orgs",
"received_events_url": "https://api.github.com/users/d-miketa/received_events",
"repos_url": "https://api.github.com/users/d-miketa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/d-miketa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d-miketa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/d-miketa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6386/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6386/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5151
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5151/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5151/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5151/events
|
https://github.com/huggingface/datasets/issues/5151
| 1,420,791,163
|
I_kwDODunzps5Ur417
| 5,151
|
Add support to create different configs with `push_to_hub` (+ inferring configs from directories with package managers?)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
] | null |
[
"also asked in https://discuss.huggingface.co/t/create-multiple-dataset-configs-with-push-to-hub-method/25480"
] | 2022-10-24T12:59:18Z
| 2022-11-04T14:55:20Z
| null |
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Now one can push only different splits within one default config of a dataset.
Would be nice to allow something like:
```
ds.push_to_hub(repo_name, config=config_name)
```
I'm not sure, but this will probably require changes in `data_files.py` patterns. If so, it would also allow to create different configs for packaged modules datasets.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5151/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5151/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5798
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5798/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5798/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5798/events
|
https://github.com/huggingface/datasets/issues/5798
| 1,685,904,526
|
I_kwDODunzps5kfNyO
| 5,798
|
Support parallelized downloading and processing in load_dataset with Spark
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12763339?v=4",
"events_url": "https://api.github.com/users/es94129/events{/privacy}",
"followers_url": "https://api.github.com/users/es94129/followers",
"following_url": "https://api.github.com/users/es94129/following{/other_user}",
"gists_url": "https://api.github.com/users/es94129/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/es94129",
"id": 12763339,
"login": "es94129",
"node_id": "MDQ6VXNlcjEyNzYzMzM5",
"organizations_url": "https://api.github.com/users/es94129/orgs",
"received_events_url": "https://api.github.com/users/es94129/received_events",
"repos_url": "https://api.github.com/users/es94129/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/es94129/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/es94129/subscriptions",
"type": "User",
"url": "https://api.github.com/users/es94129",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Hi ! We're using process pools for parallelism right now. I was wondering if there's a package that implements the same API as a process pool but runs with Spark under the hood ? That or something similar would be cool because users could use whatever distributed framework they want this way.\r\n\r\nFeel free to ping us when you'd like to open PRs for this kind of things, so that we can discuss this before you start working on it ^^",
"Hi, thanks for taking a look and providing your input! I don't know of such packages, and even it exists, I don't think with the process pool API it's possible to run Spark as backend properly; otherwise I understand a unified API would be preferable.\r\n\r\nThe process pool API requires splitting the workload to a fixed number parts for multiprocessing; meanwhile distributed framework such as Spark has sophisticated scheduler to distribute the workload to the processes on multiple machines in a cluster, so the way of splitting things for `multiprocessing.pool` would not suit / be as flexible as directly calling the `sparkContext.parallelize` API.\r\n\r\nI think this could be a good addition to scale the `datasets` implementation to distributed workers, and from my benchmark results so far it looks promising compared with multiprocessing.",
"I see ! I think we only need an equivalent of `pool.map`. We use it to run download and conversion of data files on disk. That would require less changes in the internal code - and therefore less tests to write ;)\r\n\r\nWe also use `pool.apply_async` in some places with a `Queue` to get progress updates of the running jobs. I'm mentioning this in case there's a way to get a python generator from a running spark job ? This is less important though",
"For Spark, `rdd.map` (where `rdd` can be created by `sparkContext.parallelize`) is the most similar as `pool.map`, but it requires creating a Spark RDD first that is used for distributing the `iterable` and the actual parallelization is managed by the Spark framework; `pool.map` takes the splits of `iterable` that are split into `num_proc` parts by the Python code. You can also check my PR #5807 in the `src/datasets/utils/py_utils.py` file to compare the differences of the APIs, it might make more sense than the the above description.\r\n\r\nGiven the different inputs and mechanisms of calling the `map` functions, this is why I think it's not that feasible to reuse most of the `multiprocessing` code.\r\n\r\nProgress bar updating might be challenging with Spark, I'll consider it as a followup work.",
"Indeed I think the current use of multiprocessing.Pool in `map_nested` can be rewritten to work like `sparkContext.parallelize` - without splitting the iterable.\r\n\r\nMaybe from the user's perspective it's ok to let multiprocessing.Pool or spark distribute the load on their own, as long as it takes a list and runs jobs in parallel in the end :)\r\n",
"From your feedback, seems to me there are two paths to consider now for supporting spark's `map` function in `map_nested` now:\r\n1. Keep the current `pool.map` implementation, and add an if statement for the spark's `map` code (which is what I did in my current PR) -- the code change is just a few lines in the `map_nested` function, and it has been tested by unit tests + manual testing on real Spark clusters; if you have other concerns I'd also be happy to address them.\r\n2. Rewrite the current `pool.map` implementation to remove splitting the iterable, and we will still need to add an if statement to use either\r\n```python\r\nwith Pool(...) as pool:\r\n mapped = pool.map(_single_map_nested, iterable)\r\n```\r\nor\r\n```python\r\nrdd = spark.sparkContext.parallelize(iterable)\r\nmapped = rdd.map(lambda obj: _single_map_nested((function, obj, types, None, True, None))).collect()\r\n```\r\nbecause there is no unified API that supports both `pool.map` and `rdd.map`. This can be more unified and flexible in the long run, but might require more work, and it will change the existing multiprocessing behavior, which is why I'm not leaning towards this option.\r\n\r\nAm I understanding correctly?",
"Yup correct ! I think it's a nice path because it would be possible for users to define whatever parallel processing backend they want. I think we still need to discuss how that would look like in the `datasets` API : how to specify it has to use the \"spark\" parallel backend ? And how to specify the spark session parameters (number of executors etc.) ? Maybe there is something more practical than `use_spark=True`\r\n\r\nI'll check with the team internally if they have some ideas, but feel free to share your thoughts here !",
"Sure, please let me know if you have more updates regarding the API and implementation from the team.\r\n\r\nFor parameters we don't need to worry about setting them for Spark, because Spark will figure out the environment / number of worker nodes by itself, so it's preferable to just provide some parameter such as `use_spark` to use the RDD `map` function.",
"Hi! I wanted to check in to see if there is any update from the team.\r\n\r\nA potential change of API I can think of is change the argument to `distributed_backend=...`, which accepts `str`, such as `load_dataset(..., distributed_backend=\"spark\")`.\r\n\r\nImplementation wise, we can add a class / function to abstract away the details of using multiprocessing vs. spark vs. other parallel processing frameworks in `map_nested` and `_prepare_split`.",
"I found this quite interesting: https://github.com/joblib/joblib-spark with this syntax:\r\n\r\n```python\r\nwith parallel_backend('spark', n_jobs=3):\r\n ...\r\n```\r\n\r\ncc @lu-wang-dl who might know better",
"Joblib spark is providing Spark backend for joblib. We can implement a general parallel backend like\r\n```\r\nwith parallel_backend(\"<parallel-backedn>\", n_jobs=..):\r\n```\r\n\r\nIt can support multiprocessing , spark, ray, and etc. https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend",
"Thank you @lhoestq for finding this repo. I validated that it can distribute downloading jobs with Spark to arbitrary cluster worker nodes evenly with `n_jobs=-1`.\r\n\r\nFor the API, I think it makes sense to define it as\r\n```python\r\nload_dataset(..., parallel_backend=<str>)\r\n```\r\nwhere `parallel_backend` can be `spark`, `multiprocessing`, and potentially other supported joblib backends including `ray` and `dask`.\r\n\r\nImplementation-wise, do you think it is better to just use `joblib` for `spark` backend in `map_nested`, or also migrate the `multiprocessing.Pool` code to use `joblib`?",
"Hello @lhoestq, I wanted to follow up on my previous comment with some prototyping code that demonstrates how `map_nested` would be like if we unify `multiprocessing` and `spark` with `joblib`. The snippet hasn't hashed out the details such as dealing with `tqdm` yet.\r\n\r\nIn terms of API, the way of using multiprocessing is still the same; for Spark, the user sets `parallel_backend='spark'` can reuse the `num_proc` argument to pass in the number of executors, or preferably, just set `num_proc=-1` and joblib is able to decide it (I've validated it by running it on a Spark cluster).\r\n\r\n```python\r\ndef map_nested(\r\n # ... same args\r\n parallel_backend: Optional[str] = None, # proposed new argument\r\n):\r\n\r\n # ... same code\r\n\r\n # allow user to specify num_proc=-1, so that joblib will optimize it\r\n if (num_proc <= 1 and num_proc != -1) or len(iterable) < parallel_min_length:\r\n # same code\r\n mapped = [\r\n _single_map_nested((function, obj, types, None, True, None))\r\n for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n ]\r\n else:\r\n if not parallel_backend:\r\n parallel_backend = 'loky' # 'loky' is joblib's own implementation of robust multiprocessing\r\n \r\n n_jobs = min(num_proc, len(iterable))\r\n\r\n if parallel_backend == 'spark':\r\n n_jobs = -1 # 'loky' is joblib's own implementation of robust multiprocessing\r\n from joblibspark import register_spark\r\n register_spark()\r\n\r\n # parallelized with the same API\r\n with joblib.parallel_backend(parallel_backend, n_jobs=n_jobs):\r\n mapped = joblib.Parallel()(\r\n joblib.delayed(\r\n _single_map_nested((function, obj, types, None, True, None))\r\n )(obj) for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n )\r\n \r\n # ... same code\r\n```\r\nWe can always `joblib` for Spark and other distributed backends such as Ray if people want to support them later. It's worth noting that some distributed backends do not currently have `joblib` implementations.\r\n\r\nI would appreciate your thoughts on this proposed new API. We can also discuss the pros and cons of migrating the `multiprocessing` code to `joblib` later.",
"Nice ! It should be quite easy to make the change then :)\r\n\r\nI think adding spark support can actually be less than 20 lines of code and would roughly require one line of code to change in map_nested:\r\n\r\nMaybe we can define a new `datasets.parallel` submodule that has the `parallel_backend()` context manager and a `parallel_map()` function that uses `Pool.map` by default and `joblib` otherwise.\r\n\r\n`joblib` would be an optional dependency, and `joblib-spark` as well.\r\n\r\nThen whenever someone wants to use Spark, they can do something like this (similar to scikit-learn parallel_backend):\r\n\r\n```python\r\nfrom datasets.parallel import parallel_backend\r\n\r\nwith parallel_backend(\"spark\"):\r\n ds = load_dataset(...)\r\n```\r\n\r\nWhat do you think ?",
"Although until we've switched to all the steps in `load_dataset` to use `datasets.parallel`, I would require the user to explicitly say which step should use Spark. Maybe something like this, but I'm not sure yet:\r\n\r\n```python\r\nfrom datasets.parallel import parallel_backend\r\n\r\nwith parallel_backend(\"spark\", steps=[\"download\"]):\r\n ds = load_dataset(...)\r\n```\r\nfor now some steps can be NotImplemented:\r\n```python\r\nfrom datasets.parallel import parallel_backend\r\n\r\nwith parallel_backend(\"spark\", steps=[\"download\", \"prepare\"]):\r\n# NotImplementedError: the \"prepare\" step that converts the raw data files to Arrow is not compatible with the \"spark\" backend yet\r\n```\r\n\r\nThis way we can progressively roll out Spark support for the other data loading/processing steps without breaking changes between `datasets` versions",
"Sounds good! I like the partial rollout idea.\r\nSo for example `map_nested` would call `parallel_map` under the hood if `num_proc != 1` or `parallel_backend` is specified right?\r\nI would be happy to start a PR next week to explore this path.",
"Awesome ! I think map_nested can call `parallel_map()` if num_proc > 1, and `parallel_map` can be responsible to use Pool.map by default or joblib."
] | 2023-04-27T00:16:11Z
| 2023-05-25T14:11:41Z
| null |
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
When calling `load_dataset` for datasets that have multiple files, support using Spark to distribute the downloading and processing job to worker nodes when `cache_dir` is a cloud file system shared among nodes.
```python
load_dataset(..., use_spark=True)
```
### Motivation
Further speed up `dl_manager.download` and `_prepare_split` by distributing the workloads to worker nodes.
### Your contribution
I can submit a PR to support this.
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5798/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5798/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6952
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6952/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6952/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6952/events
|
https://github.com/huggingface/datasets/pull/6952
| 2,333,320,411
|
PR_kwDODunzps5xaosH
| 6,952
|
Move info_utils errors to exceptions module
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6952). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005232 / 0.011353 (-0.006121) | 0.003744 / 0.011008 (-0.007264) | 0.064089 / 0.038508 (0.025581) | 0.032409 / 0.023109 (0.009300) | 0.255886 / 0.275898 (-0.020013) | 0.276033 / 0.323480 (-0.047447) | 0.004165 / 0.007986 (-0.003821) | 0.002741 / 0.004328 (-0.001588) | 0.052145 / 0.004250 (0.047894) | 0.043863 / 0.037052 (0.006811) | 0.258844 / 0.258489 (0.000355) | 0.290108 / 0.293841 (-0.003733) | 0.027390 / 0.128546 (-0.101156) | 0.010543 / 0.075646 (-0.065103) | 0.206936 / 0.419271 (-0.212335) | 0.036778 / 0.043533 (-0.006755) | 0.254331 / 0.255139 (-0.000808) | 0.279037 / 0.283200 (-0.004163) | 0.018564 / 0.141683 (-0.123119) | 1.112765 / 1.452155 (-0.339390) | 1.160099 / 1.492716 (-0.332617) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092148 / 0.018006 (0.074142) | 0.297156 / 0.000490 (0.296667) | 0.000211 / 0.000200 (0.000011) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018797 / 0.037411 (-0.018615) | 0.062992 / 0.014526 (0.048466) | 0.076361 / 0.176557 (-0.100195) | 0.121168 / 0.737135 (-0.615968) | 0.075845 / 0.296338 (-0.220494) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293842 / 0.215209 (0.078633) | 2.880720 / 2.077655 (0.803065) | 1.477779 / 1.504120 (-0.026341) | 1.345136 / 1.541195 (-0.196059) | 1.352153 / 1.468490 (-0.116337) | 0.574722 / 4.584777 (-4.010055) | 2.373925 / 3.745712 (-1.371787) | 2.750704 / 5.269862 (-2.519157) | 1.725979 / 4.565676 (-2.839697) | 0.063006 / 0.424275 (-0.361269) | 0.005019 / 0.007607 (-0.002588) | 0.341228 / 0.226044 (0.115184) | 3.352576 / 2.268929 (1.083647) | 1.821363 / 55.444624 (-53.623261) | 1.529441 / 6.876477 (-5.347036) | 1.543401 / 2.142072 (-0.598671) | 0.634282 / 4.805227 (-4.170945) | 0.115565 / 6.500664 (-6.385099) | 0.042514 / 0.075469 (-0.032956) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.987532 / 1.841788 (-0.854255) | 11.483853 / 8.074308 (3.409545) | 9.565657 / 10.191392 (-0.625735) | 0.141247 / 0.680424 (-0.539176) | 0.015026 / 0.534201 (-0.519175) | 0.299905 / 0.579283 (-0.279378) | 0.267667 / 0.434364 (-0.166697) | 0.320661 / 0.540337 (-0.219676) | 0.427368 / 1.386936 (-0.959568) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005448 / 0.011353 (-0.005905) | 0.003726 / 0.011008 (-0.007283) | 0.049776 / 0.038508 (0.011268) | 0.032733 / 0.023109 (0.009624) | 0.261387 / 0.275898 (-0.014511) | 0.280087 / 0.323480 (-0.043393) | 0.004351 / 0.007986 (-0.003634) | 0.002842 / 0.004328 (-0.001487) | 0.049440 / 0.004250 (0.045190) | 0.039585 / 0.037052 (0.002533) | 0.266331 / 0.258489 (0.007842) | 0.299643 / 0.293841 (0.005802) | 0.029649 / 0.128546 (-0.098897) | 0.010381 / 0.075646 (-0.065265) | 0.058596 / 0.419271 (-0.360676) | 0.033271 / 0.043533 (-0.010262) | 0.251070 / 0.255139 (-0.004069) | 0.272850 / 0.283200 (-0.010349) | 0.016728 / 0.141683 (-0.124955) | 1.146952 / 1.452155 (-0.305202) | 1.182602 / 1.492716 (-0.310114) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091673 / 0.018006 (0.073667) | 0.297228 / 0.000490 (0.296738) | 0.000197 / 0.000200 (-0.000003) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023174 / 0.037411 (-0.014237) | 0.078866 / 0.014526 (0.064341) | 0.088436 / 0.176557 (-0.088121) | 0.129650 / 0.737135 (-0.607485) | 0.091100 / 0.296338 (-0.205238) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293882 / 0.215209 (0.078673) | 2.882667 / 2.077655 (0.805012) | 1.562949 / 1.504120 (0.058829) | 1.435104 / 1.541195 (-0.106090) | 1.450815 / 1.468490 (-0.017675) | 0.584090 / 4.584777 (-4.000687) | 0.984176 / 3.745712 (-2.761536) | 2.668740 / 5.269862 (-2.601121) | 1.766993 / 4.565676 (-2.798683) | 0.064710 / 0.424275 (-0.359565) | 0.005329 / 0.007607 (-0.002278) | 0.346008 / 0.226044 (0.119964) | 3.414576 / 2.268929 (1.145647) | 1.911388 / 55.444624 (-53.533236) | 1.660357 / 6.876477 (-5.216120) | 1.818628 / 2.142072 (-0.323444) | 0.659585 / 4.805227 (-4.145643) | 0.116980 / 6.500664 (-6.383684) | 0.041364 / 0.075469 (-0.034105) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005659 / 1.841788 (-0.836129) | 12.023761 / 8.074308 (3.949453) | 10.351086 / 10.191392 (0.159694) | 0.143261 / 0.680424 (-0.537162) | 0.016143 / 0.534201 (-0.518058) | 0.287793 / 0.579283 (-0.291490) | 0.123698 / 0.434364 (-0.310666) | 0.325241 / 0.540337 (-0.215097) | 0.418772 / 1.386936 (-0.968164) |\n\n</details>\n</details>\n\n\n"
] | 2024-06-04T11:48:32Z
| 2024-06-10T14:09:59Z
| 2024-06-10T14:03:55Z
|
MEMBER
| null | null | null |
Move `info_utils` errors to `exceptions` module.
Additionally rename some of them, deprecate the former ones, and make the deprecation backward compatible (by making the new errors inherit from the former ones).
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6952/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6952/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6952.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6952",
"merged_at": "2024-06-10T14:03:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6952.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6952"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5378
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5378/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5378/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5378/events
|
https://github.com/huggingface/datasets/issues/5378
| 1,503,887,508
|
I_kwDODunzps5Zo4CU
| 5,378
|
The dataset "the_pile", subset "enron_emails" , load_dataset() failure
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/52023469?v=4",
"events_url": "https://api.github.com/users/shaoyuta/events{/privacy}",
"followers_url": "https://api.github.com/users/shaoyuta/followers",
"following_url": "https://api.github.com/users/shaoyuta/following{/other_user}",
"gists_url": "https://api.github.com/users/shaoyuta/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shaoyuta",
"id": 52023469,
"login": "shaoyuta",
"node_id": "MDQ6VXNlcjUyMDIzNDY5",
"organizations_url": "https://api.github.com/users/shaoyuta/orgs",
"received_events_url": "https://api.github.com/users/shaoyuta/received_events",
"repos_url": "https://api.github.com/users/shaoyuta/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shaoyuta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shaoyuta/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shaoyuta",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting @shaoyuta. We are investigating it.\r\n\r\nWe are transferring the issue to \"the_pile\" Community tab on the Hub: https://huggingface.co/datasets/the_pile/discussions/4"
] | 2022-12-20T02:19:13Z
| 2022-12-20T07:52:54Z
| 2022-12-20T07:52:54Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When run
"datasets.load_dataset("the_pile","enron_emails")" failure

### Steps to reproduce the bug
Run below code in python cli:
>>> import datasets
>>> datasets.load_dataset("the_pile","enron_emails")
### Expected behavior
Load dataset "the_pile", "enron_emails" successfully.
### Environment info
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.7.1
- Platform: Linux-5.15.0-53-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- PyArrow version: 10.0.0
- Pandas version: 1.4.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5378/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5378/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6407
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6407/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6407/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6407/events
|
https://github.com/huggingface/datasets/issues/6407
| 1,991,514,079
|
I_kwDODunzps52tBff
| 6,407
|
Loading the dataset from private S3 bucket gives "TypeError: cannot pickle '_contextvars.Context' object"
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1741779?v=4",
"events_url": "https://api.github.com/users/eawer/events{/privacy}",
"followers_url": "https://api.github.com/users/eawer/followers",
"following_url": "https://api.github.com/users/eawer/following{/other_user}",
"gists_url": "https://api.github.com/users/eawer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/eawer",
"id": 1741779,
"login": "eawer",
"node_id": "MDQ6VXNlcjE3NDE3Nzk=",
"organizations_url": "https://api.github.com/users/eawer/orgs",
"received_events_url": "https://api.github.com/users/eawer/received_events",
"repos_url": "https://api.github.com/users/eawer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/eawer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eawer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/eawer",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"I have encountered the same problem with `datasets-2.20.0`. \r\n\r\nI found the following workaround for this issue (including the fix from #6598):\r\n1. specify the AWS profile name in the `storage_options` instead of passing an existing session object\r\n2. use a custom `DownloadConfig` object to fix #6598\r\n3. pass the `storage_options` to the `DownloadConfig`\r\n```python\r\nfrom datasets import load_dataset, DownloadConfig\r\n\r\n# Fix for DownloadConfig from https://github.com/huggingface/datasets/issues/6598#issuecomment-1986699619\r\nclass ReviseDownloadConfig(DownloadConfig):\r\n def __post_init__(self, use_auth_token):\r\n if use_auth_token != \"deprecated\":\r\n warnings.warn(\r\n \"'use_auth_token' was deprecated in favor of 'token' in version 2.14.0 and will be removed in 3.0.0.\\n\"\r\n f\"You can remove this warning by passing 'token={use_auth_token}' instead.\",\r\n FutureWarning,\r\n )\r\n self.token = use_auth_token\r\n\r\nstorage_options={\"profile\": \"my-aws-profile-name\"}\r\n\r\nds = load_dataset(\r\n \"parquet\", \r\n data_files={\"train\": DATA_PATH}, \r\n storage_options=storage_options,\r\n download_config=ReviseDownloadConfig(storage_options=storage_options)\r\n)\r\n```"
] | 2023-11-13T21:27:43Z
| 2024-07-30T12:35:09Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I'm trying to read the parquet file from the private s3 bucket using the `load_dataset` function, but I receive `TypeError: cannot pickle '_contextvars.Context' object` error
I'm working on a machine with `~/.aws/credentials` file. I can't give credentials and the path to a file in a private bucket for obvious reasons, but I'll try to give all possible outputs.
### Steps to reproduce the bug
```python
import s3fs
from datasets import load_dataset
from aiobotocore.session import get_session
DATA_PATH = "s3://bucket_name/path/validation.parquet"
fs = s3fs.S3FileSystem(session=get_session())
```
`fs.stat` returns the data, so we can say that fs is working and we have all permissions
```python
fs.stat(DATA_PATH)
# Returns:
# {'ETag': '"123123a-19"',
# 'LastModified': datetime.datetime(2023, 11, 1, 10, 16, 57, tzinfo=tzutc()),
# 'size': 312237170,
# 'name': 'bucket_name/path/validation.parquet',
# 'type': 'file',
# 'StorageClass': 'STANDARD',
# 'VersionId': 'Abc.HtmsC9h.as',
# 'ContentType': 'binary/octet-stream'}
```
```python
fs.storage_options
# Returns:
# {'session': <aiobotocore.session.AioSession at 0x7f9193fa53c0>}
```
```python
ds = load_dataset("parquet", data_files={"train": DATA_PATH}, storage_options=fs.storage_options)
```
<details>
<summary>Returns such error (expandable)</summary>
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[88], line 1
----> 1 ds = load_dataset("parquet", data_files={"train": DATA_PATH}, storage_options=fs.storage_options)
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/load.py:2153, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2150 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
2152 # Download and prepare data
-> 2153 builder_instance.download_and_prepare(
2154 download_config=download_config,
2155 download_mode=download_mode,
2156 verification_mode=verification_mode,
2157 try_from_hf_gcs=try_from_hf_gcs,
2158 num_proc=num_proc,
2159 storage_options=storage_options,
2160 )
2162 # Build dataset for splits
2163 keep_in_memory = (
2164 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2165 )
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/builder.py:954, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
952 if num_proc is not None:
953 prepare_split_kwargs["num_proc"] = num_proc
--> 954 self._download_and_prepare(
955 dl_manager=dl_manager,
956 verification_mode=verification_mode,
957 **prepare_split_kwargs,
958 **download_and_prepare_kwargs,
959 )
960 # Sync info
961 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/builder.py:1027, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
1025 split_dict = SplitDict(dataset_name=self.dataset_name)
1026 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
-> 1027 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
1029 # Checksums verification
1030 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py:34, in Parquet._split_generators(self, dl_manager)
32 if not self.config.data_files:
33 raise ValueError(f"At least one data file must be specified, but got data_files={self.config.data_files}")
---> 34 data_files = dl_manager.download_and_extract(self.config.data_files)
35 if isinstance(data_files, (str, list, tuple)):
36 files = data_files
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_manager.py:565, in DownloadManager.download_and_extract(self, url_or_urls)
549 def download_and_extract(self, url_or_urls):
550 """Download and extract given `url_or_urls`.
551
552 Is roughly equivalent to:
(...)
563 extracted_path(s): `str`, extracted paths of given URL(s).
564 """
--> 565 return self.extract(self.download(url_or_urls))
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_manager.py:420, in DownloadManager.download(self, url_or_urls)
401 def download(self, url_or_urls):
402 """Download given URL(s).
403
404 By default, only one process is used for download. Pass customized `download_config.num_proc` to change this behavior.
(...)
418 ```
419 """
--> 420 download_config = self.download_config.copy()
421 download_config.extract_compressed_file = False
422 if download_config.download_desc is None:
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_config.py:94, in DownloadConfig.copy(self)
93 def copy(self) -> "DownloadConfig":
---> 94 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_config.py:94, in <dictcomp>(.0)
93 def copy(self) -> "DownloadConfig":
---> 94 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
[... skipping similar frames: _deepcopy_dict at line 231 (2 times), deepcopy at line 146 (2 times)]
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
[... skipping similar frames: deepcopy at line 146 (1 times)]
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:206, in _deepcopy_list(x, memo, deepcopy)
204 append = y.append
205 for a in x:
--> 206 append(deepcopy(a, memo))
207 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:238, in _deepcopy_method(x, memo)
237 def _deepcopy_method(x, memo): # Copy instance methods
--> 238 return type(x)(x.__func__, deepcopy(x.__self__, memo))
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
[... skipping similar frames: _deepcopy_dict at line 231 (3 times), deepcopy at line 146 (3 times), deepcopy at line 172 (3 times), _reconstruct at line 271 (2 times)]
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
[... skipping similar frames: _deepcopy_dict at line 231 (1 times), deepcopy at line 146 (1 times)]
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:265, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
263 if deep and args:
264 args = (deepcopy(arg, memo) for arg in args)
--> 265 y = func(*args)
266 if deep:
267 memo[id(x)] = y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:264, in <genexpr>(.0)
262 deep = memo is not None
263 if deep and args:
--> 264 args = (deepcopy(arg, memo) for arg in args)
265 y = func(*args)
266 if deep:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in _deepcopy_tuple(x, memo, deepcopy)
210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 211 y = [deepcopy(a, memo) for a in x]
212 # We're not going to put the tuple in the memo, but it's still important we
213 # check for it, in case the tuple contains recursive mutable structures.
214 try:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in <listcomp>(.0)
210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 211 y = [deepcopy(a, memo) for a in x]
212 # We're not going to put the tuple in the memo, but it's still important we
213 # check for it, in case the tuple contains recursive mutable structures.
214 try:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in _deepcopy_tuple(x, memo, deepcopy)
210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 211 y = [deepcopy(a, memo) for a in x]
212 # We're not going to put the tuple in the memo, but it's still important we
213 # check for it, in case the tuple contains recursive mutable structures.
214 try:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in <listcomp>(.0)
210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 211 y = [deepcopy(a, memo) for a in x]
212 # We're not going to put the tuple in the memo, but it's still important we
213 # check for it, in case the tuple contains recursive mutable structures.
214 try:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:161, in deepcopy(x, memo, _nil)
159 reductor = getattr(x, "__reduce_ex__", None)
160 if reductor is not None:
--> 161 rv = reductor(4)
162 else:
163 reductor = getattr(x, "__reduce__", None)
TypeError: cannot pickle '_contextvars.Context' object
```
</details>
### Expected behavior
If I choose to load the file from the public bucket with `anon=True` passed - everything works, so I expected loading from the private bucket to work as well
### Environment info
- `datasets` version: 2.14.6
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.10.13
- Huggingface_hub version: 0.19.1
- PyArrow version: 14.0.1
- Pandas version: 1.5.3
- s3fs version: 2023.10.0
- fsspec version: 2023.10.0
- aiobotocore version: 2.7.0
| null |
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6407/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6407/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/4804
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4804/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4804/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4804/events
|
https://github.com/huggingface/datasets/issues/4804
| 1,332,630,358
|
I_kwDODunzps5PblNW
| 4,804
|
streaming dataset with concatenating splits raises an error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/37621276?v=4",
"events_url": "https://api.github.com/users/Bing-su/events{/privacy}",
"followers_url": "https://api.github.com/users/Bing-su/followers",
"following_url": "https://api.github.com/users/Bing-su/following{/other_user}",
"gists_url": "https://api.github.com/users/Bing-su/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Bing-su",
"id": 37621276,
"login": "Bing-su",
"node_id": "MDQ6VXNlcjM3NjIxMjc2",
"organizations_url": "https://api.github.com/users/Bing-su/orgs",
"received_events_url": "https://api.github.com/users/Bing-su/received_events",
"repos_url": "https://api.github.com/users/Bing-su/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Bing-su/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bing-su/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Bing-su",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] | null |
[
"Hi! Only the name of a particular split (\"train\", \"test\", ...) is supported as a split pattern if `streaming=True`. We plan to address this limitation soon.",
"Hi, have you addressed this yet?",
"yes, same error occurs.\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# error\r\nrepo = \"nateraw/ade20k-tiny\"\r\ndataset = load_dataset(repo, split=\"train+validation\", streaming=True)\r\n```\r\n\r\n```python\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n[<ipython-input-3-a6ae02d63899>](https://localhost:8080/#) in <cell line: 5>()\r\n 3 # error\r\n 4 repo = \"nateraw/ade20k-tiny\"\r\n----> 5 dataset = load_dataset(repo, split=\"train+validation\", streaming=True)\r\n\r\n1 frames\r\n[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_streaming_dataset(self, split, base_path)\r\n 1265 splits_generator = splits_generators[split]\r\n 1266 else:\r\n-> 1267 raise ValueError(f\"Bad split: {split}. Available splits: {list(splits_generators)}\")\r\n 1268 \r\n 1269 # Create a dataset for each of the given splits\r\n\r\nValueError: Bad split: train+validation. Available splits: ['train', 'validation']\r\n```\r\n\r\ngoogle colab, `datasets==2.12.0`\r\n```\r\n- huggingface_hub version: 0.14.1\r\n- Platform: Linux-5.10.147+-x86_64-with-glibc2.31\r\n- Python version: 3.10.11\r\n- Running in iPython ?: No\r\n- Running in notebook ?: No\r\n- Running in Google Colab ?: No\r\n- Token path ?: /root/.cache/huggingface/token\r\n- Has saved token ?: False\r\n- Configured git credential helpers: \r\n- FastAI: 2.7.12\r\n- Tensorflow: 2.12.0\r\n- Torch: 2.0.0+cu118\r\n- Jinja2: 3.1.2\r\n- Graphviz: 0.20.1\r\n- Pydot: 1.4.2\r\n- Pillow: 8.4.0\r\n- hf_transfer: N/A\r\n- gradio: N/A\r\n- ENDPOINT: https://huggingface.co/\r\n- HUGGINGFACE_HUB_CACHE: /root/.cache/huggingface/hub\r\n- HUGGINGFACE_ASSETS_CACHE: /root/.cache/huggingface/assets\r\n- HF_TOKEN_PATH: /root/.cache/huggingface/token\r\n- HF_HUB_OFFLINE: False\r\n- HF_HUB_DISABLE_TELEMETRY: False\r\n- HF_HUB_DISABLE_PROGRESS_BARS: None\r\n- HF_HUB_DISABLE_SYMLINKS_WARNING: False\r\n- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False\r\n- HF_HUB_DISABLE_IMPLICIT_TOKEN: False\r\n- HF_HUB_ENABLE_HF_TRANSFER: False\r\n```\r\n",
"Hi!, still not fixed this, the truth is that it is an important update for what we want to train the entire dataset because we want to train fast, also should be enabled the function \"[train:18%]\" for streaming"
] | 2022-08-09T02:41:56Z
| 2023-11-25T14:52:09Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Describe the bug
streaming dataset with concatenating splits raises an error
## Steps to reproduce the bug
```python
from datasets import load_dataset
# no error
repo = "nateraw/ade20k-tiny"
dataset = load_dataset(repo, split="train+validation")
```
```python
from datasets import load_dataset
# error
repo = "nateraw/ade20k-tiny"
dataset = load_dataset(repo, split="train+validation", streaming=True)
```
```sh
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-4-a6ae02d63899>](https://localhost:8080/#) in <module>()
3 # error
4 repo = "nateraw/ade20k-tiny"
----> 5 dataset = load_dataset(repo, split="train+validation", streaming=True)
1 frames
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_streaming_dataset(self, split, base_path)
1030 splits_generator = splits_generators[split]
1031 else:
-> 1032 raise ValueError(f"Bad split: {split}. Available splits: {list(splits_generators)}")
1033
1034 # Create a dataset for each of the given splits
ValueError: Bad split: train+validation. Available splits: ['validation', 'train']
```
[Colab](https://colab.research.google.com/drive/1wMj08_0bym9jnGgByib4lsBPu8NCZBG9?usp=sharing)
## Expected results
load successfully or throws an error saying it is not supported.
## Actual results
above
## Environment info
- `datasets` version: 2.4.0
- Platform: Windows-10-10.0.22000-SP0 (windows11 x64)
- Python version: 3.9.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4804/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4804/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7533
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7533/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7533/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7533/events
|
https://github.com/huggingface/datasets/pull/7533
| 3,015,075,086
|
PR_kwDODunzps6TpraJ
| 7,533
|
Add custom fingerprint support to `from_generator`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/43753582?v=4",
"events_url": "https://api.github.com/users/simonreise/events{/privacy}",
"followers_url": "https://api.github.com/users/simonreise/followers",
"following_url": "https://api.github.com/users/simonreise/following{/other_user}",
"gists_url": "https://api.github.com/users/simonreise/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/simonreise",
"id": 43753582,
"login": "simonreise",
"node_id": "MDQ6VXNlcjQzNzUzNTgy",
"organizations_url": "https://api.github.com/users/simonreise/orgs",
"received_events_url": "https://api.github.com/users/simonreise/received_events",
"repos_url": "https://api.github.com/users/simonreise/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/simonreise/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonreise/subscriptions",
"type": "User",
"url": "https://api.github.com/users/simonreise",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"This is great !\r\n\r\nWhat do you think of passing `config_id=` directly to the builder instead of just the suffix ? This would be a power user argument though, or for internal use. And in from_generator the new argument can be `fingerprint=` as in `Dataset.__init__()`\r\n\r\nThe `config_id` can be defined using something like `config_id = \"default-fingerprint=\" + fingerprint`\r\n\r\nI feel ike this could make the Dataset API more coherent if we avoid introducing a new argument while we can juste use `fingerprint=`"
] | 2025-04-23T19:31:35Z
| 2025-04-24T10:22:53Z
| null |
NONE
| null | null | null |
This PR adds `dataset_id_suffix` parameter to 'Dataset.from_generator' function.
`Dataset.from_generator` function passes all of its arguments to `BuilderConfig.create_config_id`, including generator function itself. `BuilderConfig.create_config_id` function tries to hash all the args, which can take a large amount of time or even cause MemoryError if the dataset processed in a generator function is large enough.
This PR allows user to pass a custom fingerprint (`dataset_id_suffix`) to be used as a suffix in a dataset name instead of the one generated by hashing the args.
This PR is a possible solution of #7513
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7533/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7533/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7533.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7533",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7533.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7533"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5893
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5893/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5893/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5893/events
|
https://github.com/huggingface/datasets/pull/5893
| 1,722,519,056
|
PR_kwDODunzps5RK40K
| 5,893
|
Load cached dataset as iterable
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10278877?v=4",
"events_url": "https://api.github.com/users/mariusz-jachimowicz-83/events{/privacy}",
"followers_url": "https://api.github.com/users/mariusz-jachimowicz-83/followers",
"following_url": "https://api.github.com/users/mariusz-jachimowicz-83/following{/other_user}",
"gists_url": "https://api.github.com/users/mariusz-jachimowicz-83/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariusz-jachimowicz-83",
"id": 10278877,
"login": "mariusz-jachimowicz-83",
"node_id": "MDQ6VXNlcjEwMjc4ODc3",
"organizations_url": "https://api.github.com/users/mariusz-jachimowicz-83/orgs",
"received_events_url": "https://api.github.com/users/mariusz-jachimowicz-83/received_events",
"repos_url": "https://api.github.com/users/mariusz-jachimowicz-83/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariusz-jachimowicz-83/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariusz-jachimowicz-83/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariusz-jachimowicz-83",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@lhoestq Could you please look into that and review?",
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq I refactored the code. Could you please check is it what you requested?",
"@lhoestq Thanks for a review. Excellent tips. All tips applied. ",
"I think there is just PythonFormatter that needs to be imported in the test file and we should be good to merge",
"@lhoestq that is weird. I have linter error when I do it.",
"@lhoestq Now it should work properly.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006152 / 0.011353 (-0.005201) | 0.004169 / 0.011008 (-0.006839) | 0.097968 / 0.038508 (0.059460) | 0.028325 / 0.023109 (0.005216) | 0.308958 / 0.275898 (0.033060) | 0.341832 / 0.323480 (0.018352) | 0.005098 / 0.007986 (-0.002887) | 0.004721 / 0.004328 (0.000393) | 0.075067 / 0.004250 (0.070817) | 0.040514 / 0.037052 (0.003462) | 0.308355 / 0.258489 (0.049866) | 0.351063 / 0.293841 (0.057222) | 0.025261 / 0.128546 (-0.103285) | 0.008483 / 0.075646 (-0.067163) | 0.321219 / 0.419271 (-0.098052) | 0.058258 / 0.043533 (0.014725) | 0.312572 / 0.255139 (0.057433) | 0.330667 / 0.283200 (0.047467) | 0.091047 / 0.141683 (-0.050635) | 1.536541 / 1.452155 (0.084387) | 1.606566 / 1.492716 (0.113850) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213234 / 0.018006 (0.195228) | 0.494801 / 0.000490 (0.494311) | 0.003764 / 0.000200 (0.003564) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023653 / 0.037411 (-0.013758) | 0.097176 / 0.014526 (0.082650) | 0.102961 / 0.176557 (-0.073595) | 0.164285 / 0.737135 (-0.572851) | 0.107586 / 0.296338 (-0.188753) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421402 / 0.215209 (0.206193) | 4.195828 / 2.077655 (2.118174) | 1.884664 / 1.504120 (0.380544) | 1.679750 / 1.541195 (0.138556) | 1.719725 / 1.468490 (0.251235) | 0.552290 / 4.584777 (-4.032486) | 3.386337 / 3.745712 (-0.359375) | 1.771527 / 5.269862 (-3.498334) | 1.133327 / 4.565676 (-3.432349) | 0.067911 / 0.424275 (-0.356364) | 0.012572 / 0.007607 (0.004965) | 0.518004 / 0.226044 (0.291960) | 5.192381 / 2.268929 (2.923453) | 2.316032 / 55.444624 (-53.128592) | 1.993264 / 6.876477 (-4.883212) | 2.071009 / 2.142072 (-0.071063) | 0.655062 / 4.805227 (-4.150165) | 0.135488 / 6.500664 (-6.365177) | 0.067273 / 0.075469 (-0.008196) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.217731 / 1.841788 (-0.624056) | 13.812927 / 8.074308 (5.738619) | 13.137886 / 10.191392 (2.946494) | 0.143102 / 0.680424 (-0.537322) | 0.016884 / 0.534201 (-0.517317) | 0.370106 / 0.579283 (-0.209178) | 0.392349 / 0.434364 (-0.042015) | 0.424501 / 0.540337 (-0.115837) | 0.509830 / 1.386936 (-0.877106) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006210 / 0.011353 (-0.005142) | 0.004215 / 0.011008 (-0.006793) | 0.076129 / 0.038508 (0.037621) | 0.027825 / 0.023109 (0.004716) | 0.403973 / 0.275898 (0.128075) | 0.441089 / 0.323480 (0.117609) | 0.005420 / 0.007986 (-0.002566) | 0.004870 / 0.004328 (0.000542) | 0.075558 / 0.004250 (0.071308) | 0.039464 / 0.037052 (0.002411) | 0.404329 / 0.258489 (0.145840) | 0.447213 / 0.293841 (0.153372) | 0.025877 / 0.128546 (-0.102669) | 0.008660 / 0.075646 (-0.066987) | 0.081849 / 0.419271 (-0.337422) | 0.044551 / 0.043533 (0.001018) | 0.379102 / 0.255139 (0.123963) | 0.403104 / 0.283200 (0.119905) | 0.094754 / 0.141683 (-0.046929) | 1.460772 / 1.452155 (0.008617) | 1.569531 / 1.492716 (0.076815) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.183923 / 0.018006 (0.165917) | 0.420708 / 0.000490 (0.420219) | 0.002091 / 0.000200 (0.001891) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026180 / 0.037411 (-0.011231) | 0.101529 / 0.014526 (0.087003) | 0.108739 / 0.176557 (-0.067818) | 0.160702 / 0.737135 (-0.576433) | 0.111739 / 0.296338 (-0.184600) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448671 / 0.215209 (0.233462) | 4.469287 / 2.077655 (2.391632) | 2.244335 / 1.504120 (0.740215) | 2.107495 / 1.541195 (0.566301) | 2.224763 / 1.468490 (0.756272) | 0.554006 / 4.584777 (-4.030771) | 3.390109 / 3.745712 (-0.355603) | 1.744189 / 5.269862 (-3.525673) | 1.008515 / 4.565676 (-3.557161) | 0.067904 / 0.424275 (-0.356371) | 0.012243 / 0.007607 (0.004636) | 0.557635 / 0.226044 (0.331590) | 5.610383 / 2.268929 (3.341454) | 2.687326 / 55.444624 (-52.757298) | 2.405262 / 6.876477 (-4.471214) | 2.527300 / 2.142072 (0.385227) | 0.662282 / 4.805227 (-4.142945) | 0.136225 / 6.500664 (-6.364439) | 0.068136 / 0.075469 (-0.007334) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.310791 / 1.841788 (-0.530997) | 14.370381 / 8.074308 (6.296072) | 14.122675 / 10.191392 (3.931283) | 0.152302 / 0.680424 (-0.528122) | 0.016624 / 0.534201 (-0.517577) | 0.359395 / 0.579283 (-0.219888) | 0.392131 / 0.434364 (-0.042233) | 0.423796 / 0.540337 (-0.116542) | 0.511387 / 1.386936 (-0.875549) |\n\n</details>\n</details>\n\n\n"
] | 2023-05-23T17:40:35Z
| 2023-06-01T11:58:24Z
| 2023-06-01T11:51:29Z
|
CONTRIBUTOR
| null | null | null |
To be used to train models it allows to load an IterableDataset from the cached Arrow file.
See https://github.com/huggingface/datasets/issues/5481
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5893/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5893/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5893.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5893",
"merged_at": "2023-06-01T11:51:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5893.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5893"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7532
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7532/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7532/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7532/events
|
https://github.com/huggingface/datasets/pull/7532
| 3,009,546,204
|
PR_kwDODunzps6TW8Ss
| 7,532
|
Document the HF_DATASETS_CACHE environment variable in the datasets cache documentation
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/129883215?v=4",
"events_url": "https://api.github.com/users/Harry-Yang0518/events{/privacy}",
"followers_url": "https://api.github.com/users/Harry-Yang0518/followers",
"following_url": "https://api.github.com/users/Harry-Yang0518/following{/other_user}",
"gists_url": "https://api.github.com/users/Harry-Yang0518/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Harry-Yang0518",
"id": 129883215,
"login": "Harry-Yang0518",
"node_id": "U_kgDOB73cTw",
"organizations_url": "https://api.github.com/users/Harry-Yang0518/orgs",
"received_events_url": "https://api.github.com/users/Harry-Yang0518/received_events",
"repos_url": "https://api.github.com/users/Harry-Yang0518/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Harry-Yang0518/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Harry-Yang0518/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Harry-Yang0518",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-04-22T00:23:13Z
| 2025-04-22T00:23:13Z
| null |
NONE
| null | null | null |
This pull request updates the Datasets documentation to include the `HF_DATASETS_CACHE` environment variable. While the current documentation only mentions `HF_HOME` for overriding the default cache directory, `HF_DATASETS_CACHE` is also a supported and useful option for specifying a custom cache location for datasets stored in Arrow format.
This addition is based on the discussion in (https://github.com/huggingface/datasets/issues/7457), where users noted the absence of this variable in the documentation despite its functionality. The update adds a new section to `cache.mdx` that explains how to use `HF_DATASETS_CACHE` with an example.
This change aims to improve clarity and help users better manage their cache directories when working in shared environments or with limited local storage.
Closes #7457.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7532/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7532/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7532.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7532",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7532.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7532"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4805
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4805/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4805/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4805/events
|
https://github.com/huggingface/datasets/issues/4805
| 1,332,653,531
|
I_kwDODunzps5Pbq3b
| 4,805
|
Wrong example in opus_gnome dataset card
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38291975?v=4",
"events_url": "https://api.github.com/users/gojiteji/events{/privacy}",
"followers_url": "https://api.github.com/users/gojiteji/followers",
"following_url": "https://api.github.com/users/gojiteji/following{/other_user}",
"gists_url": "https://api.github.com/users/gojiteji/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gojiteji",
"id": 38291975,
"login": "gojiteji",
"node_id": "MDQ6VXNlcjM4MjkxOTc1",
"organizations_url": "https://api.github.com/users/gojiteji/orgs",
"received_events_url": "https://api.github.com/users/gojiteji/received_events",
"repos_url": "https://api.github.com/users/gojiteji/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gojiteji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gojiteji/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gojiteji",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[] | 2022-08-09T03:21:27Z
| 2022-08-09T11:52:05Z
| 2022-08-09T11:52:05Z
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Describe the bug
I found that [the example on opus_gone dataset ](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary) doesn't work.
## Steps to reproduce the bug
```python
load_dataset("gnome", lang1="it", lang2="pl")
```
`"gnome"` should be `"opus_gnome"`
## Expected results
```bash
100%
1/1 [00:00<00:00, 42.09it/s]
DatasetDict({
train: Dataset({
features: ['id', 'translation'],
num_rows: 8368
})
})
```
## Actual results
```bash
Couldn't find 'gnome' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/main/datasets/gnome/gnome.py
```
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.27
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4805/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4805/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7078
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7078/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7078/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7078/events
|
https://github.com/huggingface/datasets/pull/7078
| 2,433,270,271
|
PR_kwDODunzps52oq4n
| 7,078
|
Fix CI test_convert_to_parquet
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7078). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005262 / 0.011353 (-0.006090) | 0.003733 / 0.011008 (-0.007275) | 0.062619 / 0.038508 (0.024111) | 0.029491 / 0.023109 (0.006382) | 0.248947 / 0.275898 (-0.026951) | 0.278741 / 0.323480 (-0.044739) | 0.003173 / 0.007986 (-0.004813) | 0.002777 / 0.004328 (-0.001551) | 0.049344 / 0.004250 (0.045094) | 0.043103 / 0.037052 (0.006051) | 0.252402 / 0.258489 (-0.006087) | 0.288030 / 0.293841 (-0.005811) | 0.029425 / 0.128546 (-0.099121) | 0.012058 / 0.075646 (-0.063589) | 0.204509 / 0.419271 (-0.214762) | 0.035721 / 0.043533 (-0.007812) | 0.249121 / 0.255139 (-0.006018) | 0.272171 / 0.283200 (-0.011029) | 0.019515 / 0.141683 (-0.122168) | 1.130088 / 1.452155 (-0.322067) | 1.148856 / 1.492716 (-0.343860) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093613 / 0.018006 (0.075607) | 0.300830 / 0.000490 (0.300340) | 0.000219 / 0.000200 (0.000019) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018381 / 0.037411 (-0.019030) | 0.061515 / 0.014526 (0.046989) | 0.074370 / 0.176557 (-0.102186) | 0.120751 / 0.737135 (-0.616384) | 0.074971 / 0.296338 (-0.221367) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280499 / 0.215209 (0.065290) | 2.763114 / 2.077655 (0.685459) | 1.458696 / 1.504120 (-0.045424) | 1.331214 / 1.541195 (-0.209981) | 1.343157 / 1.468490 (-0.125333) | 0.732775 / 4.584777 (-3.852002) | 2.381485 / 3.745712 (-1.364227) | 2.930117 / 5.269862 (-2.339745) | 1.887617 / 4.565676 (-2.678059) | 0.080543 / 0.424275 (-0.343732) | 0.005136 / 0.007607 (-0.002471) | 0.336924 / 0.226044 (0.110879) | 3.343071 / 2.268929 (1.074142) | 1.823677 / 55.444624 (-53.620948) | 1.572300 / 6.876477 (-5.304176) | 1.564040 / 2.142072 (-0.578032) | 0.802369 / 4.805227 (-4.002858) | 0.135198 / 6.500664 (-6.365466) | 0.041499 / 0.075469 (-0.033970) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.961202 / 1.841788 (-0.880585) | 11.275695 / 8.074308 (3.201387) | 9.508052 / 10.191392 (-0.683340) | 0.136921 / 0.680424 (-0.543503) | 0.014055 / 0.534201 (-0.520146) | 0.300076 / 0.579283 (-0.279208) | 0.263403 / 0.434364 (-0.170961) | 0.340871 / 0.540337 (-0.199466) | 0.433452 / 1.386936 (-0.953484) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005683 / 0.011353 (-0.005670) | 0.003596 / 0.011008 (-0.007412) | 0.049913 / 0.038508 (0.011405) | 0.033275 / 0.023109 (0.010166) | 0.266011 / 0.275898 (-0.009887) | 0.295182 / 0.323480 (-0.028298) | 0.004336 / 0.007986 (-0.003649) | 0.002787 / 0.004328 (-0.001541) | 0.049035 / 0.004250 (0.044784) | 0.039833 / 0.037052 (0.002781) | 0.283520 / 0.258489 (0.025031) | 0.317437 / 0.293841 (0.023596) | 0.032578 / 0.128546 (-0.095968) | 0.011744 / 0.075646 (-0.063902) | 0.060174 / 0.419271 (-0.359097) | 0.034182 / 0.043533 (-0.009351) | 0.271821 / 0.255139 (0.016682) | 0.292189 / 0.283200 (0.008989) | 0.017045 / 0.141683 (-0.124638) | 1.127742 / 1.452155 (-0.324413) | 1.180621 / 1.492716 (-0.312095) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093798 / 0.018006 (0.075792) | 0.310715 / 0.000490 (0.310226) | 0.000213 / 0.000200 (0.000013) | 0.000046 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022379 / 0.037411 (-0.015032) | 0.076823 / 0.014526 (0.062298) | 0.088086 / 0.176557 (-0.088471) | 0.128926 / 0.737135 (-0.608210) | 0.089187 / 0.296338 (-0.207151) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293982 / 0.215209 (0.078773) | 2.930932 / 2.077655 (0.853277) | 1.576425 / 1.504120 (0.072305) | 1.445163 / 1.541195 (-0.096031) | 1.462118 / 1.468490 (-0.006372) | 0.725816 / 4.584777 (-3.858961) | 0.949767 / 3.745712 (-2.795945) | 2.832821 / 5.269862 (-2.437041) | 1.897064 / 4.565676 (-2.668612) | 0.079853 / 0.424275 (-0.344423) | 0.005352 / 0.007607 (-0.002255) | 0.344551 / 0.226044 (0.118507) | 3.442506 / 2.268929 (1.173578) | 1.938700 / 55.444624 (-53.505925) | 1.662205 / 6.876477 (-5.214272) | 1.769061 / 2.142072 (-0.373011) | 0.818089 / 4.805227 (-3.987139) | 0.134612 / 6.500664 (-6.366052) | 0.040419 / 0.075469 (-0.035050) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.032267 / 1.841788 (-0.809521) | 11.902598 / 8.074308 (3.828290) | 10.342229 / 10.191392 (0.150837) | 0.140509 / 0.680424 (-0.539915) | 0.015593 / 0.534201 (-0.518608) | 0.303326 / 0.579283 (-0.275957) | 0.127391 / 0.434364 (-0.306973) | 0.342095 / 0.540337 (-0.198243) | 0.438978 / 1.386936 (-0.947958) |\n\n</details>\n</details>\n\n\n"
] | 2024-07-27T05:32:40Z
| 2024-07-27T05:50:57Z
| 2024-07-27T05:44:32Z
|
MEMBER
| null | null | null |
Fix `test_convert_to_parquet` by patching `HfApi.preupload_lfs_files` and revert temporary fix:
- #7074
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7078/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7078/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7078.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7078",
"merged_at": "2024-07-27T05:44:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7078.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7078"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5411
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5411/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5411/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5411/events
|
https://github.com/huggingface/datasets/pull/5411
| 1,523,297,786
|
PR_kwDODunzps5G23-T
| 5,411
|
Update docs of S3 filesystem with async aiobotocore
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5677912?v=4",
"events_url": "https://api.github.com/users/maheshpec/events{/privacy}",
"followers_url": "https://api.github.com/users/maheshpec/followers",
"following_url": "https://api.github.com/users/maheshpec/following{/other_user}",
"gists_url": "https://api.github.com/users/maheshpec/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/maheshpec",
"id": 5677912,
"login": "maheshpec",
"node_id": "MDQ6VXNlcjU2Nzc5MTI=",
"organizations_url": "https://api.github.com/users/maheshpec/orgs",
"received_events_url": "https://api.github.com/users/maheshpec/received_events",
"repos_url": "https://api.github.com/users/maheshpec/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/maheshpec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maheshpec/subscriptions",
"type": "User",
"url": "https://api.github.com/users/maheshpec",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008587 / 0.011353 (-0.002766) | 0.004613 / 0.011008 (-0.006395) | 0.100446 / 0.038508 (0.061938) | 0.029606 / 0.023109 (0.006497) | 0.302102 / 0.275898 (0.026204) | 0.357364 / 0.323480 (0.033884) | 0.007031 / 0.007986 (-0.000954) | 0.003593 / 0.004328 (-0.000735) | 0.078110 / 0.004250 (0.073860) | 0.035495 / 0.037052 (-0.001557) | 0.312522 / 0.258489 (0.054033) | 0.349336 / 0.293841 (0.055495) | 0.033719 / 0.128546 (-0.094827) | 0.011449 / 0.075646 (-0.064197) | 0.321760 / 0.419271 (-0.097512) | 0.043697 / 0.043533 (0.000165) | 0.304476 / 0.255139 (0.049337) | 0.333126 / 0.283200 (0.049926) | 0.092756 / 0.141683 (-0.048927) | 1.506734 / 1.452155 (0.054579) | 1.547381 / 1.492716 (0.054664) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.178177 / 0.018006 (0.160171) | 0.427814 / 0.000490 (0.427324) | 0.002505 / 0.000200 (0.002305) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023039 / 0.037411 (-0.014372) | 0.097113 / 0.014526 (0.082587) | 0.105014 / 0.176557 (-0.071543) | 0.141185 / 0.737135 (-0.595950) | 0.108843 / 0.296338 (-0.187495) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424148 / 0.215209 (0.208939) | 4.247599 / 2.077655 (2.169944) | 2.130720 / 1.504120 (0.626600) | 1.916349 / 1.541195 (0.375154) | 1.831515 / 1.468490 (0.363025) | 0.688301 / 4.584777 (-3.896476) | 3.381749 / 3.745712 (-0.363963) | 2.900045 / 5.269862 (-2.369817) | 1.576248 / 4.565676 (-2.989428) | 0.082354 / 0.424275 (-0.341921) | 0.012200 / 0.007607 (0.004593) | 0.525753 / 0.226044 (0.299709) | 5.277672 / 2.268929 (3.008743) | 2.603870 / 55.444624 (-52.840754) | 2.296203 / 6.876477 (-4.580273) | 2.308014 / 2.142072 (0.165942) | 0.809056 / 4.805227 (-3.996171) | 0.148122 / 6.500664 (-6.352542) | 0.066097 / 0.075469 (-0.009372) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.214059 / 1.841788 (-0.627728) | 13.671332 / 8.074308 (5.597024) | 13.694554 / 10.191392 (3.503162) | 0.151454 / 0.680424 (-0.528970) | 0.028514 / 0.534201 (-0.505687) | 0.391480 / 0.579283 (-0.187804) | 0.404499 / 0.434364 (-0.029865) | 0.458111 / 0.540337 (-0.082226) | 0.539454 / 1.386936 (-0.847482) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006795 / 0.011353 (-0.004558) | 0.004463 / 0.011008 (-0.006545) | 0.099542 / 0.038508 (0.061034) | 0.027588 / 0.023109 (0.004479) | 0.423023 / 0.275898 (0.147125) | 0.458459 / 0.323480 (0.134979) | 0.004981 / 0.007986 (-0.003005) | 0.003321 / 0.004328 (-0.001008) | 0.075727 / 0.004250 (0.071477) | 0.040541 / 0.037052 (0.003489) | 0.423724 / 0.258489 (0.165235) | 0.468334 / 0.293841 (0.174493) | 0.031732 / 0.128546 (-0.096814) | 0.011478 / 0.075646 (-0.064168) | 0.319807 / 0.419271 (-0.099465) | 0.041215 / 0.043533 (-0.002318) | 0.423060 / 0.255139 (0.167921) | 0.446157 / 0.283200 (0.162957) | 0.088884 / 0.141683 (-0.052799) | 1.553404 / 1.452155 (0.101250) | 1.607797 / 1.492716 (0.115080) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208314 / 0.018006 (0.190307) | 0.411627 / 0.000490 (0.411137) | 0.002416 / 0.000200 (0.002216) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024641 / 0.037411 (-0.012770) | 0.101047 / 0.014526 (0.086521) | 0.108410 / 0.176557 (-0.068147) | 0.142860 / 0.737135 (-0.594276) | 0.112486 / 0.296338 (-0.183852) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.485520 / 0.215209 (0.270311) | 4.864009 / 2.077655 (2.786355) | 2.541865 / 1.504120 (1.037745) | 2.339569 / 1.541195 (0.798374) | 2.378258 / 1.468490 (0.909768) | 0.698000 / 4.584777 (-3.886777) | 3.343137 / 3.745712 (-0.402575) | 1.842264 / 5.269862 (-3.427597) | 1.154707 / 4.565676 (-3.410969) | 0.082826 / 0.424275 (-0.341449) | 0.012379 / 0.007607 (0.004772) | 0.583335 / 0.226044 (0.357291) | 5.885934 / 2.268929 (3.617006) | 2.997769 / 55.444624 (-52.446856) | 2.653681 / 6.876477 (-4.222796) | 2.761656 / 2.142072 (0.619583) | 0.799883 / 4.805227 (-4.005344) | 0.151398 / 6.500664 (-6.349266) | 0.067445 / 0.075469 (-0.008024) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.292009 / 1.841788 (-0.549779) | 13.976180 / 8.074308 (5.901872) | 14.219469 / 10.191392 (4.028077) | 0.127810 / 0.680424 (-0.552614) | 0.016919 / 0.534201 (-0.517282) | 0.376401 / 0.579283 (-0.202882) | 0.388563 / 0.434364 (-0.045801) | 0.444904 / 0.540337 (-0.095433) | 0.532290 / 1.386936 (-0.854646) |\n\n</details>\n</details>\n\n\n"
] | 2023-01-06T23:19:17Z
| 2023-01-18T11:18:59Z
| 2023-01-18T11:12:04Z
|
CONTRIBUTOR
| null | null | null |
[s3fs has migrated to all async calls](https://github.com/fsspec/s3fs/commit/0de2c6fb3d87c08ea694de96dca0d0834034f8bf).
Updating documentation to use `AioSession` while using s3fs for download manager as well as working with datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5411/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5411/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5411.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5411",
"merged_at": "2023-01-18T11:12:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5411.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5411"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5462
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5462/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5462/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5462/events
|
https://github.com/huggingface/datasets/pull/5462
| 1,556,572,144
|
PR_kwDODunzps5Iglqu
| 5,462
|
Concatenate on axis=1 with misaligned blocks
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008860 / 0.011353 (-0.002493) | 0.004564 / 0.011008 (-0.006444) | 0.101556 / 0.038508 (0.063048) | 0.030000 / 0.023109 (0.006891) | 0.304404 / 0.275898 (0.028506) | 0.366247 / 0.323480 (0.042767) | 0.007182 / 0.007986 (-0.000804) | 0.003583 / 0.004328 (-0.000746) | 0.079665 / 0.004250 (0.075415) | 0.036529 / 0.037052 (-0.000523) | 0.310998 / 0.258489 (0.052509) | 0.346954 / 0.293841 (0.053113) | 0.034098 / 0.128546 (-0.094448) | 0.011576 / 0.075646 (-0.064070) | 0.320448 / 0.419271 (-0.098824) | 0.043328 / 0.043533 (-0.000205) | 0.307317 / 0.255139 (0.052178) | 0.325071 / 0.283200 (0.041871) | 0.096406 / 0.141683 (-0.045277) | 1.540331 / 1.452155 (0.088176) | 1.589533 / 1.492716 (0.096817) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011034 / 0.018006 (-0.006972) | 0.422066 / 0.000490 (0.421577) | 0.002409 / 0.000200 (0.002209) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023703 / 0.037411 (-0.013708) | 0.099935 / 0.014526 (0.085409) | 0.105966 / 0.176557 (-0.070591) | 0.142259 / 0.737135 (-0.594876) | 0.109327 / 0.296338 (-0.187011) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418381 / 0.215209 (0.203172) | 4.177564 / 2.077655 (2.099909) | 1.880196 / 1.504120 (0.376076) | 1.669169 / 1.541195 (0.127974) | 1.725989 / 1.468490 (0.257499) | 0.689384 / 4.584777 (-3.895393) | 3.380963 / 3.745712 (-0.364749) | 1.884192 / 5.269862 (-3.385670) | 1.162409 / 4.565676 (-3.403268) | 0.082045 / 0.424275 (-0.342230) | 0.012575 / 0.007607 (0.004968) | 0.525824 / 0.226044 (0.299779) | 5.272574 / 2.268929 (3.003646) | 2.283492 / 55.444624 (-53.161132) | 1.947390 / 6.876477 (-4.929087) | 2.013790 / 2.142072 (-0.128283) | 0.806280 / 4.805227 (-3.998948) | 0.149267 / 6.500664 (-6.351397) | 0.066967 / 0.075469 (-0.008502) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.216511 / 1.841788 (-0.625277) | 13.869829 / 8.074308 (5.795521) | 14.189967 / 10.191392 (3.998575) | 0.148716 / 0.680424 (-0.531708) | 0.028324 / 0.534201 (-0.505877) | 0.390856 / 0.579283 (-0.188427) | 0.404389 / 0.434364 (-0.029975) | 0.456050 / 0.540337 (-0.084287) | 0.544139 / 1.386936 (-0.842797) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006727 / 0.011353 (-0.004626) | 0.004515 / 0.011008 (-0.006494) | 0.098791 / 0.038508 (0.060283) | 0.027596 / 0.023109 (0.004487) | 0.439066 / 0.275898 (0.163168) | 0.480555 / 0.323480 (0.157076) | 0.005066 / 0.007986 (-0.002920) | 0.004669 / 0.004328 (0.000341) | 0.075334 / 0.004250 (0.071084) | 0.039779 / 0.037052 (0.002726) | 0.439860 / 0.258489 (0.181371) | 0.480787 / 0.293841 (0.186946) | 0.031550 / 0.128546 (-0.096996) | 0.011668 / 0.075646 (-0.063978) | 0.317348 / 0.419271 (-0.101923) | 0.041312 / 0.043533 (-0.002220) | 0.442934 / 0.255139 (0.187795) | 0.463677 / 0.283200 (0.180478) | 0.090066 / 0.141683 (-0.051617) | 1.544152 / 1.452155 (0.091998) | 1.584455 / 1.492716 (0.091738) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224284 / 0.018006 (0.206278) | 0.406982 / 0.000490 (0.406492) | 0.000427 / 0.000200 (0.000227) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024914 / 0.037411 (-0.012497) | 0.102608 / 0.014526 (0.088082) | 0.106931 / 0.176557 (-0.069626) | 0.140828 / 0.737135 (-0.596308) | 0.112015 / 0.296338 (-0.184324) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471078 / 0.215209 (0.255869) | 4.705742 / 2.077655 (2.628088) | 2.437442 / 1.504120 (0.933322) | 2.242768 / 1.541195 (0.701573) | 2.302158 / 1.468490 (0.833668) | 0.697314 / 4.584777 (-3.887462) | 3.357730 / 3.745712 (-0.387982) | 1.913306 / 5.269862 (-3.356556) | 1.173879 / 4.565676 (-3.391798) | 0.083257 / 0.424275 (-0.341018) | 0.012480 / 0.007607 (0.004873) | 0.573407 / 0.226044 (0.347362) | 5.728650 / 2.268929 (3.459721) | 2.868863 / 55.444624 (-52.575761) | 2.548640 / 6.876477 (-4.327837) | 2.596622 / 2.142072 (0.454549) | 0.805563 / 4.805227 (-3.999664) | 0.150860 / 6.500664 (-6.349804) | 0.068344 / 0.075469 (-0.007125) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.300368 / 1.841788 (-0.541420) | 13.920451 / 8.074308 (5.846143) | 14.222430 / 10.191392 (4.031038) | 0.152497 / 0.680424 (-0.527927) | 0.017415 / 0.534201 (-0.516786) | 0.378827 / 0.579283 (-0.200456) | 0.384165 / 0.434364 (-0.050199) | 0.439364 / 0.540337 (-0.100973) | 0.525710 / 1.386936 (-0.861226) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008482 / 0.011353 (-0.002871) | 0.004405 / 0.011008 (-0.006604) | 0.099662 / 0.038508 (0.061154) | 0.029062 / 0.023109 (0.005953) | 0.298329 / 0.275898 (0.022431) | 0.332837 / 0.323480 (0.009357) | 0.006760 / 0.007986 (-0.001225) | 0.003290 / 0.004328 (-0.001039) | 0.077659 / 0.004250 (0.073409) | 0.034745 / 0.037052 (-0.002307) | 0.303134 / 0.258489 (0.044644) | 0.346402 / 0.293841 (0.052561) | 0.033511 / 0.128546 (-0.095035) | 0.011464 / 0.075646 (-0.064183) | 0.322932 / 0.419271 (-0.096340) | 0.040697 / 0.043533 (-0.002836) | 0.301951 / 0.255139 (0.046812) | 0.328961 / 0.283200 (0.045761) | 0.084802 / 0.141683 (-0.056881) | 1.506247 / 1.452155 (0.054092) | 1.547631 / 1.492716 (0.054915) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.190370 / 0.018006 (0.172363) | 0.405786 / 0.000490 (0.405297) | 0.002196 / 0.000200 (0.001997) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022958 / 0.037411 (-0.014453) | 0.095736 / 0.014526 (0.081210) | 0.103684 / 0.176557 (-0.072872) | 0.138200 / 0.737135 (-0.598936) | 0.105618 / 0.296338 (-0.190721) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415239 / 0.215209 (0.200030) | 4.147223 / 2.077655 (2.069569) | 1.850322 / 1.504120 (0.346202) | 1.662815 / 1.541195 (0.121620) | 1.671563 / 1.468490 (0.203073) | 0.693806 / 4.584777 (-3.890971) | 3.352938 / 3.745712 (-0.392774) | 1.849257 / 5.269862 (-3.420604) | 1.161603 / 4.565676 (-3.404074) | 0.081884 / 0.424275 (-0.342391) | 0.012726 / 0.007607 (0.005119) | 0.521105 / 0.226044 (0.295061) | 5.231910 / 2.268929 (2.962981) | 2.306073 / 55.444624 (-53.138551) | 1.950449 / 6.876477 (-4.926028) | 1.988433 / 2.142072 (-0.153640) | 0.811168 / 4.805227 (-3.994059) | 0.149960 / 6.500664 (-6.350704) | 0.064845 / 0.075469 (-0.010624) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.221487 / 1.841788 (-0.620301) | 13.756534 / 8.074308 (5.682226) | 13.825369 / 10.191392 (3.633977) | 0.155641 / 0.680424 (-0.524783) | 0.028444 / 0.534201 (-0.505757) | 0.390364 / 0.579283 (-0.188919) | 0.397592 / 0.434364 (-0.036772) | 0.455905 / 0.540337 (-0.084433) | 0.534606 / 1.386936 (-0.852330) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006281 / 0.011353 (-0.005071) | 0.004533 / 0.011008 (-0.006475) | 0.098328 / 0.038508 (0.059820) | 0.026998 / 0.023109 (0.003889) | 0.424814 / 0.275898 (0.148915) | 0.457653 / 0.323480 (0.134173) | 0.004617 / 0.007986 (-0.003368) | 0.003320 / 0.004328 (-0.001009) | 0.075884 / 0.004250 (0.071634) | 0.035865 / 0.037052 (-0.001187) | 0.431674 / 0.258489 (0.173185) | 0.468286 / 0.293841 (0.174445) | 0.031915 / 0.128546 (-0.096631) | 0.011680 / 0.075646 (-0.063967) | 0.319575 / 0.419271 (-0.099696) | 0.047792 / 0.043533 (0.004259) | 0.428191 / 0.255139 (0.173052) | 0.445657 / 0.283200 (0.162458) | 0.090464 / 0.141683 (-0.051218) | 1.465480 / 1.452155 (0.013326) | 1.548985 / 1.492716 (0.056268) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185671 / 0.018006 (0.167664) | 0.399274 / 0.000490 (0.398784) | 0.002822 / 0.000200 (0.002622) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025934 / 0.037411 (-0.011477) | 0.099480 / 0.014526 (0.084954) | 0.110264 / 0.176557 (-0.066293) | 0.140558 / 0.737135 (-0.596577) | 0.110832 / 0.296338 (-0.185507) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.473491 / 0.215209 (0.258282) | 4.722507 / 2.077655 (2.644852) | 2.456242 / 1.504120 (0.952122) | 2.255999 / 1.541195 (0.714804) | 2.300816 / 1.468490 (0.832326) | 0.698226 / 4.584777 (-3.886551) | 3.397296 / 3.745712 (-0.348416) | 2.741674 / 5.269862 (-2.528187) | 1.462103 / 4.565676 (-3.103573) | 0.082736 / 0.424275 (-0.341539) | 0.012183 / 0.007607 (0.004576) | 0.580144 / 0.226044 (0.354099) | 5.794351 / 2.268929 (3.525422) | 2.881201 / 55.444624 (-52.563423) | 2.544384 / 6.876477 (-4.332093) | 2.555227 / 2.142072 (0.413154) | 0.805849 / 4.805227 (-3.999378) | 0.151822 / 6.500664 (-6.348842) | 0.067477 / 0.075469 (-0.007992) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.300224 / 1.841788 (-0.541564) | 13.595361 / 8.074308 (5.521053) | 13.967622 / 10.191392 (3.776230) | 0.129222 / 0.680424 (-0.551202) | 0.016939 / 0.534201 (-0.517262) | 0.375190 / 0.579283 (-0.204094) | 0.383511 / 0.434364 (-0.050853) | 0.437179 / 0.540337 (-0.103158) | 0.525674 / 1.386936 (-0.861262) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012364 / 0.011353 (0.001011) | 0.006098 / 0.011008 (-0.004911) | 0.158908 / 0.038508 (0.120400) | 0.039798 / 0.023109 (0.016689) | 0.383786 / 0.275898 (0.107888) | 0.533961 / 0.323480 (0.210481) | 0.012079 / 0.007986 (0.004094) | 0.006483 / 0.004328 (0.002155) | 0.109660 / 0.004250 (0.105410) | 0.048391 / 0.037052 (0.011339) | 0.447426 / 0.258489 (0.188937) | 0.477292 / 0.293841 (0.183451) | 0.066492 / 0.128546 (-0.062054) | 0.021155 / 0.075646 (-0.054492) | 0.474473 / 0.419271 (0.055202) | 0.063520 / 0.043533 (0.019987) | 0.444941 / 0.255139 (0.189802) | 0.450675 / 0.283200 (0.167475) | 0.129236 / 0.141683 (-0.012447) | 2.009362 / 1.452155 (0.557207) | 1.912067 / 1.492716 (0.419350) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260384 / 0.018006 (0.242378) | 0.577654 / 0.000490 (0.577165) | 0.004977 / 0.000200 (0.004777) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028101 / 0.037411 (-0.009310) | 0.161680 / 0.014526 (0.147154) | 0.146107 / 0.176557 (-0.030450) | 0.173878 / 0.737135 (-0.563257) | 0.186149 / 0.296338 (-0.110190) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.689835 / 0.215209 (0.474626) | 6.775888 / 2.077655 (4.698234) | 2.885499 / 1.504120 (1.381379) | 2.486855 / 1.541195 (0.945660) | 2.540831 / 1.468490 (1.072341) | 1.328135 / 4.584777 (-3.256642) | 5.964983 / 3.745712 (2.219271) | 3.400713 / 5.269862 (-1.869149) | 2.423257 / 4.565676 (-2.142419) | 0.129767 / 0.424275 (-0.294508) | 0.017936 / 0.007607 (0.010328) | 0.909284 / 0.226044 (0.683239) | 8.778791 / 2.268929 (6.509863) | 3.890757 / 55.444624 (-51.553867) | 3.072116 / 6.876477 (-3.804360) | 3.085390 / 2.142072 (0.943318) | 1.571710 / 4.805227 (-3.233517) | 0.279290 / 6.500664 (-6.221374) | 0.087775 / 0.075469 (0.012306) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.751223 / 1.841788 (-0.090564) | 20.313135 / 8.074308 (12.238827) | 22.793800 / 10.191392 (12.602408) | 0.296052 / 0.680424 (-0.384372) | 0.053420 / 0.534201 (-0.480781) | 0.600626 / 0.579283 (0.021343) | 0.634505 / 0.434364 (0.200142) | 0.724000 / 0.540337 (0.183663) | 0.869283 / 1.386936 (-0.517653) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.014876 / 0.011353 (0.003523) | 0.008113 / 0.011008 (-0.002895) | 0.177038 / 0.038508 (0.138530) | 0.050825 / 0.023109 (0.027716) | 0.473989 / 0.275898 (0.198091) | 0.601058 / 0.323480 (0.277578) | 0.007536 / 0.007986 (-0.000450) | 0.006761 / 0.004328 (0.002432) | 0.105260 / 0.004250 (0.101010) | 0.073960 / 0.037052 (0.036908) | 0.447711 / 0.258489 (0.189222) | 0.609998 / 0.293841 (0.316157) | 0.061280 / 0.128546 (-0.067267) | 0.019370 / 0.075646 (-0.056276) | 0.510466 / 0.419271 (0.091194) | 0.062695 / 0.043533 (0.019162) | 0.436778 / 0.255139 (0.181639) | 0.489916 / 0.283200 (0.206717) | 0.137305 / 0.141683 (-0.004378) | 1.801554 / 1.452155 (0.349399) | 2.082409 / 1.492716 (0.589692) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.291304 / 0.018006 (0.273298) | 0.599041 / 0.000490 (0.598551) | 0.008017 / 0.000200 (0.007817) | 0.000127 / 0.000054 (0.000072) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031243 / 0.037411 (-0.006169) | 0.139689 / 0.014526 (0.125163) | 0.138678 / 0.176557 (-0.037878) | 0.180458 / 0.737135 (-0.556677) | 0.149753 / 0.296338 (-0.146585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.699692 / 0.215209 (0.484482) | 7.273327 / 2.077655 (5.195672) | 3.222650 / 1.504120 (1.718530) | 2.679424 / 1.541195 (1.138229) | 2.842378 / 1.468490 (1.373888) | 1.394633 / 4.584777 (-3.190143) | 6.379970 / 3.745712 (2.634258) | 5.944663 / 5.269862 (0.674801) | 3.105214 / 4.565676 (-1.460462) | 0.138790 / 0.424275 (-0.285485) | 0.014211 / 0.007607 (0.006604) | 0.815275 / 0.226044 (0.589230) | 8.549334 / 2.268929 (6.280405) | 3.754795 / 55.444624 (-51.689829) | 3.125222 / 6.876477 (-3.751255) | 3.269639 / 2.142072 (1.127566) | 1.464187 / 4.805227 (-3.341040) | 0.314557 / 6.500664 (-6.186107) | 0.107354 / 0.075469 (0.031885) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.480793 / 1.841788 (-0.360995) | 16.770328 / 8.074308 (8.696019) | 18.054861 / 10.191392 (7.863469) | 0.198257 / 0.680424 (-0.482167) | 0.026493 / 0.534201 (-0.507708) | 0.489701 / 0.579283 (-0.089582) | 0.540890 / 0.434364 (0.106526) | 0.566675 / 0.540337 (0.026337) | 0.661918 / 1.386936 (-0.725018) |\n\n</details>\n</details>\n\n\n"
] | 2023-01-25T12:33:22Z
| 2023-01-26T09:37:00Z
| 2023-01-26T09:27:19Z
|
MEMBER
| null | null | null |
Allow to concatenate on axis 1 two tables made of misaligned blocks.
For example if the first table has 2 row blocks of 3 rows each, and the second table has 3 row blocks or 2 rows each.
To do that, I slice the row blocks to re-align the blocks.
Fix https://github.com/huggingface/datasets/issues/5413
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5462/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5462/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5462.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5462",
"merged_at": "2023-01-26T09:27:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5462.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5462"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6417
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6417/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6417/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6417/events
|
https://github.com/huggingface/datasets/issues/6417
| 1,993,149,416
|
I_kwDODunzps52zQvo
| 6,417
|
Bug: LayoutLMv3 finetuning on FUNSD Notebook; Arrow Error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/57496007?v=4",
"events_url": "https://api.github.com/users/Davo00/events{/privacy}",
"followers_url": "https://api.github.com/users/Davo00/followers",
"following_url": "https://api.github.com/users/Davo00/following{/other_user}",
"gists_url": "https://api.github.com/users/Davo00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Davo00",
"id": 57496007,
"login": "Davo00",
"node_id": "MDQ6VXNlcjU3NDk2MDA3",
"organizations_url": "https://api.github.com/users/Davo00/orgs",
"received_events_url": "https://api.github.com/users/Davo00/received_events",
"repos_url": "https://api.github.com/users/Davo00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Davo00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Davo00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Davo00",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Very strange: `datasets-cli env`\r\n> \r\n> Copy-and-paste the text below in your GitHub issue.\r\n> \r\n> - `datasets` version: 2.9.0\r\n> - Platform: macOS-14.0-arm64-arm-64bit\r\n> - Python version: 3.9.13\r\n> - PyArrow version: 8.0.0\r\n> - Pandas version: 1.3.5\r\n\r\nAfter updating datasets and pyarrow on base environment, although I am using a different one called layoutLM\r\n\r\n> Copy-and-paste the text below in your GitHub issue.\r\n> \r\n> - `datasets` version: 2.14.6\r\n> - Platform: macOS-14.0-arm64-arm-64bit\r\n> - Python version: 3.9.18\r\n> - Huggingface_hub version: 0.17.3\r\n> - PyArrow version: 14.0.1\r\n> - Pandas version: 2.1.3",
"Hi! The latest (patch) release (published a few hours ago) includes a fix for this [PyArrow security issue](https://github.com/advisories/GHSA-5wvp-7f3h-6wmm). To install it, run `pip install -U datasets`.",
"> Hi! The latest (patch) release (published a few hours ago) includes a fix for this [PyArrow security issue](https://github.com/advisories/GHSA-5wvp-7f3h-6wmm). To install it, run `pip install -U datasets`.\r\n\r\nThanks for the info and the latest release, it seems this has also solved my issue. First run after the update worked and I am training right now :D\r\nWill close the Issu"
] | 2023-11-14T16:53:20Z
| 2023-11-16T20:23:41Z
| 2023-11-16T20:23:41Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Arrow issues when running the example Notebook laptop locally on Mac with M1. Works on Google Collab.
**Notebook**: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv3/Fine_tune_LayoutLMv3_on_FUNSD_(HuggingFace_Trainer).ipynb
**Error**: `ValueError: Arrow type extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>> does not have a datasets dtype equivalent.`
**Caused by**:
```
# we need to define custom features for `set_format` (used later on) to work properly
features = Features({
'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)),
'input_ids': Sequence(feature=Value(dtype='int64')),
'attention_mask': Sequence(Value(dtype='int64')),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
'labels': Sequence(feature=Value(dtype='int64')),
})
```
### Steps to reproduce the bug
Run the notebook provided, locally. If possible also on M1.
### Expected behavior
The cell where features are mapped to Array2D and Array3D should work without any issues.
### Environment info
Tried with Python 3.9 and 3.10 conda envs. Running Mac M1.
`pip show datasets`
> Name: datasets
Version: 2.14.6
Summary: HuggingFace community-driven open-source library of datasets
`pip list`
> Package Version
> ------------------------- ------------
> accelerate 0.24.1
> aiohttp 3.8.6
> aiosignal 1.3.1
> anyio 3.5.0
> appnope 0.1.2
> argon2-cffi 21.3.0
> argon2-cffi-bindings 21.2.0
> asttokens 2.0.5
> async-timeout 4.0.3
> attrs 23.1.0
> backcall 0.2.0
> beautifulsoup4 4.12.2
> bleach 4.1.0
> certifi 2023.7.22
> cffi 1.15.1
> charset-normalizer 3.3.2
> comm 0.1.2
> datasets 2.14.6
> debugpy 1.6.7
> decorator 5.1.1
> defusedxml 0.7.1
> dill 0.3.7
> entrypoints 0.4
> exceptiongroup 1.0.4
> executing 0.8.3
> fastjsonschema 2.16.2
> filelock 3.13.1
> frozenlist 1.4.0
> fsspec 2023.10.0
> huggingface-hub 0.17.3
> idna 3.4
> importlib-metadata 6.0.0
> IProgress 0.4
> ipykernel 6.25.0
> ipython 8.15.0
> ipython-genutils 0.2.0
> jedi 0.18.1
> Jinja2 3.1.2
> joblib 1.3.2
> jsonschema 4.19.2
> jsonschema-specifications 2023.7.1
> jupyter_client 7.4.9
> jupyter_core 5.5.0
> jupyter-server 1.23.4
> jupyterlab-pygments 0.1.2
> MarkupSafe 2.1.1
> matplotlib-inline 0.1.6
> mistune 2.0.4
> mpmath 1.3.0
> multidict 6.0.4
> multiprocess 0.70.15
> nbclassic 1.0.0
> nbclient 0.8.0
> nbconvert 7.10.0
> nbformat 5.9.2
> nest-asyncio 1.5.6
> networkx 3.2.1
> notebook 6.5.4
> notebook_shim 0.2.3
> numpy 1.26.1
> packaging 23.1
> pandas 2.1.3
> pandocfilters 1.5.0
> parso 0.8.3
> pexpect 4.8.0
> pickleshare 0.7.5
> Pillow 10.1.0
> pip 23.3
> platformdirs 3.10.0
> prometheus-client 0.14.1
> prompt-toolkit 3.0.36
> psutil 5.9.0
> ptyprocess 0.7.0
> pure-eval 0.2.2
> pyarrow 14.0.1
> pycparser 2.21
> Pygments 2.15.1
> python-dateutil 2.8.2
> pytz 2023.3.post1
> PyYAML 6.0.1
> pyzmq 23.2.0
> referencing 0.30.2
> regex 2023.10.3
> requests 2.31.0
> rpds-py 0.10.6
> safetensors 0.4.0
> scikit-learn 1.3.2
> scipy 1.11.3
> Send2Trash 1.8.2
> seqeval 1.2.2
> setuptools 68.0.0
> six 1.16.0
> sniffio 1.2.0
> soupsieve 2.5
> stack-data 0.2.0
> sympy 1.12
> terminado 0.17.1
> threadpoolctl 3.2.0
> tinycss2 1.2.1
> tokenizers 0.14.1
> torch 2.1.0
> tornado 6.3.3
> tqdm 4.66.1
> traitlets 5.7.1
> transformers 4.36.0.dev0
> typing_extensions 4.7.1
> tzdata 2023.3
> urllib3 2.0.7
> wcwidth 0.2.5
> webencodings 0.5.1
> websocket-client 0.58.0
> wheel 0.41.2
> xxhash 3.4.1
> yarl 1.9.2
> zipp 3.11.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/57496007?v=4",
"events_url": "https://api.github.com/users/Davo00/events{/privacy}",
"followers_url": "https://api.github.com/users/Davo00/followers",
"following_url": "https://api.github.com/users/Davo00/following{/other_user}",
"gists_url": "https://api.github.com/users/Davo00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Davo00",
"id": 57496007,
"login": "Davo00",
"node_id": "MDQ6VXNlcjU3NDk2MDA3",
"organizations_url": "https://api.github.com/users/Davo00/orgs",
"received_events_url": "https://api.github.com/users/Davo00/received_events",
"repos_url": "https://api.github.com/users/Davo00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Davo00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Davo00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Davo00",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6417/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6417/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7537
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7537/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7537/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7537/events
|
https://github.com/huggingface/datasets/issues/7537
| 3,018,792,966
|
I_kwDODunzps6z7yAG
| 7,537
|
`datasets.map(..., num_proc=4)` multi-processing fails
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/24477841?v=4",
"events_url": "https://api.github.com/users/faaany/events{/privacy}",
"followers_url": "https://api.github.com/users/faaany/followers",
"following_url": "https://api.github.com/users/faaany/following{/other_user}",
"gists_url": "https://api.github.com/users/faaany/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/faaany",
"id": 24477841,
"login": "faaany",
"node_id": "MDQ6VXNlcjI0NDc3ODQx",
"organizations_url": "https://api.github.com/users/faaany/orgs",
"received_events_url": "https://api.github.com/users/faaany/received_events",
"repos_url": "https://api.github.com/users/faaany/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/faaany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faaany/subscriptions",
"type": "User",
"url": "https://api.github.com/users/faaany",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-04-25T01:53:47Z
| 2025-04-25T05:53:29Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
The following code fails in python 3.11+
```python
tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"])
```
Error log:
```bash
Traceback (most recent call last):
File "/usr/local/lib/python3.12/dist-packages/multiprocess/process.py", line 315, in _bootstrap
self.run()
File "/usr/local/lib/python3.12/dist-packages/multiprocess/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.12/dist-packages/multiprocess/pool.py", line 114, in worker
task = get()
^^^^^
File "/usr/local/lib/python3.12/dist-packages/multiprocess/queues.py", line 371, in get
return _ForkingPickler.loads(res)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/dill/_dill.py", line 327, in loads
return load(file, ignore, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/dill/_dill.py", line 313, in load
return Unpickler(file, ignore=ignore, **kwds).load()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/dill/_dill.py", line 525, in load
obj = StockUnpickler.load(self)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/dill/_dill.py", line 659, in _create_code
if len(args) == 16: return CodeType(*args)
^^^^^^^^^^^^^^^
TypeError: code() argument 13 must be str, not int
```
After upgrading dill to the latest 0.4.0 with "pip install --upgrade dill", it can pass. So it seems that there is a compatibility issue between dill 0.3.4 and python 3.11+, because python 3.10 works fine.
Is the dill deterministic issue mentioned in https://github.com/huggingface/datasets/blob/main/setup.py#L117) still valid? Any plan to unpin?
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7537/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7537/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6599
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6599/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6599/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6599/events
|
https://github.com/huggingface/datasets/issues/6599
| 2,086,684,664
|
I_kwDODunzps58YEf4
| 6,599
|
Easy way to segment into 30s snippets given an m4a file and a vtt file
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/78278410?v=4",
"events_url": "https://api.github.com/users/RonanKMcGovern/events{/privacy}",
"followers_url": "https://api.github.com/users/RonanKMcGovern/followers",
"following_url": "https://api.github.com/users/RonanKMcGovern/following{/other_user}",
"gists_url": "https://api.github.com/users/RonanKMcGovern/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/RonanKMcGovern",
"id": 78278410,
"login": "RonanKMcGovern",
"node_id": "MDQ6VXNlcjc4Mjc4NDEw",
"organizations_url": "https://api.github.com/users/RonanKMcGovern/orgs",
"received_events_url": "https://api.github.com/users/RonanKMcGovern/received_events",
"repos_url": "https://api.github.com/users/RonanKMcGovern/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/RonanKMcGovern/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RonanKMcGovern/subscriptions",
"type": "User",
"url": "https://api.github.com/users/RonanKMcGovern",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"Hi! Non-generic data processing is out of this library's scope, so it's downstream libraries/users' responsibility to implement such logic.",
"That's fair. Thanks"
] | 2024-01-17T17:51:40Z
| 2024-01-23T10:42:17Z
| 2024-01-22T15:35:49Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Uploading datasets is straightforward thanks to the ability to push Audio to hub. However, it would be nice if the data (text and audio) could be segmented when being pushed (if not possible already).
### Motivation
It's easy to create a vtt file from an audio file. If there could be auto-segmenting, this would make the creation of datasets much faster.
### Your contribution
I have made a custom script to do this but it's not all that clean - uses librosa and pydub.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6599/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6599/timeline
| null |
not_planned
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7435
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7435/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7435/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7435/events
|
https://github.com/huggingface/datasets/pull/7435
| 2,895,536,956
|
PR_kwDODunzps6NYUnr
| 7,435
|
Refactor `string_to_dict` to return `None` if there is no match instead of raising `ValueError`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/27844407?v=4",
"events_url": "https://api.github.com/users/ringohoffman/events{/privacy}",
"followers_url": "https://api.github.com/users/ringohoffman/followers",
"following_url": "https://api.github.com/users/ringohoffman/following{/other_user}",
"gists_url": "https://api.github.com/users/ringohoffman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ringohoffman",
"id": 27844407,
"login": "ringohoffman",
"node_id": "MDQ6VXNlcjI3ODQ0NDA3",
"organizations_url": "https://api.github.com/users/ringohoffman/orgs",
"received_events_url": "https://api.github.com/users/ringohoffman/received_events",
"repos_url": "https://api.github.com/users/ringohoffman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ringohoffman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ringohoffman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ringohoffman",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"cc: @lhoestq ",
"I am going to rebase #7434 onto this branch. Then we can merge this one first if you approve, and then #7434.",
"@lhoestq any thoughts here?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7435). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"It looks like I was unsafely asserting that `source_url_fields is not None` in `image.py`, `video.py` and `audio.py` (which did not correspond to the `except ValueError` like was there previously). I've changed it to handle `source_url_fields is None`.",
"Can we re-run CI on this one?",
"Sweet! These failures are looking spurious due to connectivity issues. Can the failing run be retried?",
"@lhoestq Sorry to double ping, but can this PR be reviewed? I think it is ready!\n"
] | 2025-03-04T22:01:20Z
| 2025-03-12T16:52:00Z
| 2025-03-12T16:52:00Z
|
CONTRIBUTOR
| null | null | null |
Making this change, as encouraged here:
* https://github.com/huggingface/datasets/pull/7434#discussion_r1979933054
instead of having the pattern of using `try`-`except` to handle when there is no match, we can instead check if the return value is `None`; we can also assert that the return value should not be `None` if we know that should be true
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7435/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7435/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7435.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7435",
"merged_at": "2025-03-12T16:51:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7435.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7435"
}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.